Add non-blocking version of PQcancel
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns.
This patch adds a new cancellation API to libpq, called
PQcancelConnectStart, which can be used to send cancellations in a
non-blocking fashion. To do this it internally uses the regular PGconn
connection establishment code. A downside of this is that
PQcancelConnectStart cannot safely be called from a signal handler.
Luckily, this should be fine for most usages of this API, since most
code that uses an event loop handles signals in that event loop as
well (as opposed to calling functions from the signal handler directly).
There are also a few advantages to this approach:
1. There is no need to add and maintain a second non-blocking connection
establishment codepath.
2. Cancel connections automatically benefit from any improvements made
to the normal connection establishment codepath. Examples of things
they currently get for free are TLS support and keepalive settings.
This patch also includes a test for this new API (and also the already
existing cancellation APIs). The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
NOTE: I have not tested this with GSS for the moment. My expectation is
that using this new API with a GSS connection will result in a
CONNECTION_BAD status when calling PQcancelStatus. The reason for this
is that GSS reads will also need to communicate back that an EOF was
found, just like I've done for TLS reads and unencrypted reads, since
in the case of a cancel connection an EOF is actually expected and
should not be treated as an error.
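To make the intended usage concrete, here is a minimal sketch of driving the new API from an event loop, closely following the polling loop in the patch's test_cancel(). The inline select() stands in for whatever readiness mechanism a real event loop provides, and error handling is reduced to stderr messages:

```c
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

/*
 * Sketch: non-blocking cancellation of the query running on "conn".
 * A real event loop would register the socket for readiness callbacks
 * instead of calling select() inline as done here.
 */
static void
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConnectStart(conn);

	if (cancelConn == NULL)
		return;					/* out of memory */
	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "cancel failed: %s",
				PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return;
	}

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelConnectPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (pollres == PGRES_POLLING_OK || pollres == PGRES_POLLING_FAILED)
			break;

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* Wait for the cancel connection's socket to become ready */
		(void) select(sock + 1, &input_mask, &output_mask, NULL, NULL);
	}

	if (PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
		fprintf(stderr, "unexpected cancel status: %s",
				PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
}
```

This requires a live connection to a server, so it is illustrative only; see test_cancel() in libpq_pipeline.c for the version that is actually exercised by the test suite.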
Attachments:
0001-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 0e0d747c60d564991fc375f439649ac6f35f4578 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH] Add non-blocking version of PQcancel
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns.
This patch adds a new cancellation API to libpq, called
PQcancelConnectStart, which can be used to send cancellations in a
non-blocking fashion. To do this it internally uses the regular PGconn
connection establishment code. A downside of this is that
PQcancelConnectStart cannot safely be called from a signal handler.
Luckily, this should be fine for most usages of this API, since most
code that uses an event loop handles signals in that event loop as
well (as opposed to calling functions from the signal handler directly).
There are also a few advantages to this approach:
1. There is no need to add and maintain a second non-blocking connection
establishment codepath.
2. Cancel connections automatically benefit from any improvements made
to the normal connection establishment codepath. Examples of things
they currently get for free are TLS support and keepalive settings.
This patch also includes a test for this new API (and also the already
existing cancellation APIs). The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
NOTE: I have not tested this with GSS for the moment. My expectation is
that using this new API with a GSS connection will result in a
CONNECTION_BAD status when calling PQcancelStatus. The reason for this
is that GSS reads will also need to communicate back that an EOF was
found, just like I've done for TLS reads and unencrypted reads, since
in the case of a cancel connection an EOF is actually expected and
should not be treated as an error.
---
src/interfaces/libpq/exports.txt | 7 +
src/interfaces/libpq/fe-connect.c | 192 +++++++++++++++++-
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 3 +
src/interfaces/libpq/libpq-fe.h | 13 ++
src/interfaces/libpq/libpq-int.h | 8 +
.../modules/libpq_pipeline/libpq_pipeline.c | 115 ++++++++++-
.../libpq_pipeline/t/001_libpq_pipeline.pl | 2 +-
9 files changed, 348 insertions(+), 9 deletions(-)
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..64364afeaf 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,10 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelConnect 187
+PQcancelConnectStart 188
+PQcancelConnectPoll 189
+PQcancelStatus 189
+PQcancelSocket 190
+PQcancelErrorMessage 191
+PQcancelFinish 192
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 5fc16be849..347d32ad5f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -378,6 +378,7 @@ static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -604,8 +605,11 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +741,120 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConnectStart
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGcancelConn to actually complete the cancellation
+ * of the request.
+ */
+PGcancelConn *
+PQcancelConnectStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy over information needed to cancel
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Connect to the database
+ */
+ if (!connectDBStart(cancelConn))
+ {
+ /* Just in case we failed to set it in connectDBStart */
+ cancelConn->status = CONNECTION_BAD;
+ }
+
+ return (PGcancelConn *) cancelConn;
+}
+
+/*
+ * PQcancelConnect
+ *
+ * Cancel a request on the given connection
+ */
+PGcancelConn *
+PQcancelConnect(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConnectStart(conn);
+
+ if (cancelConn && cancelConn->conn.status != CONNECTION_BAD)
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelConnectPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelConnectPoll(PGcancelConn * cancelConn)
+{
+ return PQconnectPoll((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((PGconn *) cancelConn);
+}
+
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQconnectStartParams
*
@@ -914,6 +1032,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2276,6 +2434,17 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+ if (n == -2 && conn->cancelRequest)
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2950,6 +3119,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 7fcfe08fd2..a95d63ffcd 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 9f735ba437..3cd65fa276 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -252,7 +252,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 0b998e254d..b2c66f47a5 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,9 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except in the case of a clean connection
+ * closure, in which case it returns -2.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 20eb855abc..39aed5db3e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -57,6 +57,7 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
@@ -163,6 +164,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -327,6 +333,13 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
+extern PGcancelConn * PQcancelConnectStart(PGconn *conn);
+extern PGcancelConn * PQcancelConnect(PGconn *conn);
+extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
/* backwards compatible version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcce13843e..8af3dd0ee7 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,8 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest;
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -574,6 +576,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -691,6 +698,7 @@ extern int pqPutInt(int value, size_t bytes, PGconn *conn);
extern int pqPutMsgStart(char msg_type, PGconn *conn);
extern int pqPutMsgEnd(PGconn *conn);
extern int pqReadData(PGconn *conn);
+extern int pqReadDataOrEof(PGconn *conn);
extern int pqFlush(PGconn *conn);
extern int pqWait(int forRead, int forWrite, PGconn *conn);
extern int pqWaitTimed(int forRead, int forWrite, PGconn *conn,
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..27188d43bb 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,116 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+static void
+confirm_query_cancelled(PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal("PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal("query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal("query failed with a different error than cancellation: %s", PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+static void
+test_cancel(PGconn *conn)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ char errorbuf[256];
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /* test PQrequestcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ PQrequestCancel(conn);
+ confirm_query_cancelled(conn);
+
+ /* test PQcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancel = PQgetCancel(conn);
+ PQcancel(cancel, errorbuf, sizeof(errorbuf));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelConnect */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQcancelConnect(conn);
+ if (PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConnectStart and then polling with PQcancelConnectPoll */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQcancelConnectStart(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelConnectPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1555,6 +1665,7 @@ print_test_list(void)
printf("singlerow\n");
printf("transaction\n");
printf("uniqviol\n");
+ printf("cancel\n");
}
int
@@ -1642,7 +1753,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
diff --git a/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl b/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl
index 0c164dcaba..e0773543ae 100644
--- a/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl
+++ b/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl
@@ -26,7 +26,7 @@ for my $testname (@tests)
my @extraargs = ('-r', $numrows);
my $cmptrace = grep(/^$testname$/,
qw(simple_pipeline nosync multi_pipelines prepared singlerow
- pipeline_abort transaction disallowed_in_pipeline)) > 0;
+ pipeline_abort transaction disallowed_in_pipeline cancel)) > 0;
# For a bunch of tests, generate a libpq trace file too.
my $traceout = "$PostgreSQL::Test::Utils::tmp_check/traces/$testname.trace";
--
2.17.1
Hi,
On 2022-01-12 15:22:18 +0000, Jelte Fennema wrote:
This patch also includes a test for this new API (and also the already
existing cancellation APIs). The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
Right now the tests fail to build on windows with:
[15:45:10.518] src/interfaces/libpq/libpqdll.def : fatal error LNK1121: duplicate ordinal number '189' [c:\cirrus\libpq.vcxproj]
and fails tests on other platforms. See
https://cirrus-ci.com/build/4791821363576832
NOTE: I have not tested this with GSS for the moment. My expectation is
that using this new API with a GSS connection will result in a
CONNECTION_BAD status when calling PQcancelStatus. The reason for this
is that GSS reads will also need to communicate back that an EOF was
found, just like I've done for TLS reads and unencrypted reads. Since in
case of a cancel connection an EOF is actually expected, and should not
be treated as an error.
The failures do not seem related to this.
Greetings,
Andres Freund
Attached is an updated patch which I believe fixes Windows and the other test failures.
At least on my machine, make check-world passes now when compiled with --enable-tap-tests.
I also included a second patch which adds some basic documentation for the libpq tests.
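For callers that don't need the non-blocking behaviour, the blocking convenience wrapper keeps usage short. A sketch, again mirroring test_cancel() from the patch (illustrative only, since it needs a live server):

```c
#include <stdio.h>
#include "libpq-fe.h"

/*
 * Sketch: blocking cancellation of the query running on "conn",
 * using the PQcancelConnect() convenience wrapper from the patch.
 */
static void
cancel_query_blocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConnect(conn);

	if (cancelConn == NULL ||
		PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
		fprintf(stderr, "cancel failed: %s",
				cancelConn ? PQcancelErrorMessage(cancelConn)
						   : "out of memory");
	/* PQcancelFinish frees the PGcancelConn; safe on NULL like PQfinish */
	PQcancelFinish(cancelConn);
}
```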
Attachments:
0001-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 3619ddcfc11d7d0cc39cb6dc12a8561fb895b385 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH 1/2] Add non-blocking version of PQcancel
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns.
This patch adds a new cancellation API to libpq, called
PQcancelConnectStart, which can be used to send cancellations in a
non-blocking fashion. To do this it internally uses the regular PGconn
connection establishment code. A downside of this is that
PQcancelConnectStart cannot safely be called from a signal handler.
Luckily, this should be fine for most usages of this API, since most
code that uses an event loop handles signals in that event loop as
well (as opposed to calling functions from the signal handler directly).
There are also a few advantages to this approach:
1. There is no need to add and maintain a second non-blocking connection
establishment codepath.
2. Cancel connections automatically benefit from any improvements made
to the normal connection establishment codepath. Examples of things
they currently get for free are TLS support and keepalive settings.
This patch also includes a test for this new API (and also the already
existing cancellation APIs). The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
NOTE: I have not tested this with GSS for the moment. My expectation is
that using this new API with a GSS connection will result in a
CONNECTION_BAD status when calling PQcancelStatus. The reason for this
is that GSS reads will also need to communicate back that an EOF was
found, just like I've done for TLS reads and unencrypted reads, since
in the case of a cancel connection an EOF is actually expected and
should not be treated as an error.
---
src/interfaces/libpq/exports.txt | 7 +
src/interfaces/libpq/fe-connect.c | 192 +++++++++++++++++-
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 3 +
src/interfaces/libpq/libpq-fe.h | 13 ++
src/interfaces/libpq/libpq-int.h | 8 +
.../modules/libpq_pipeline/libpq_pipeline.c | 115 ++++++++++-
8 files changed, 347 insertions(+), 8 deletions(-)
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..a06214a78f 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,10 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelConnect 187
+PQcancelConnectStart 188
+PQcancelConnectPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelFinish 193
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 5fc16be849..347d32ad5f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -378,6 +378,7 @@ static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -604,8 +605,11 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +741,120 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConnectStart
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGcancelConn to actually complete the cancellation
+ * of the request.
+ */
+PGcancelConn *
+PQcancelConnectStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy over information needed to cancel
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Connect to the database
+ */
+ if (!connectDBStart(cancelConn))
+ {
+ /* Just in case we failed to set it in connectDBStart */
+ cancelConn->status = CONNECTION_BAD;
+ }
+
+ return (PGcancelConn *) cancelConn;
+}
+
+/*
+ * PQcancelConnect
+ *
+ * Cancel a request on the given connection
+ */
+PGcancelConn *
+PQcancelConnect(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConnectStart(conn);
+
+ if (cancelConn && cancelConn->conn.status != CONNECTION_BAD)
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelConnectPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelConnectPoll(PGcancelConn * cancelConn)
+{
+ return PQconnectPoll((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((PGconn *) cancelConn);
+}
+
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQconnectStartParams
*
@@ -914,6 +1032,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2276,6 +2434,17 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+ if (n == -2 && conn->cancelRequest)
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2950,6 +3119,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 7fcfe08fd2..a95d63ffcd 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 9f735ba437..3cd65fa276 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -252,7 +252,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 0b998e254d..b2c66f47a5 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,9 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except in the case of clean connection
+ * closure then it returns -2.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 20eb855abc..39aed5db3e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -57,6 +57,7 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
@@ -163,6 +164,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -327,6 +333,13 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
+extern PGcancelConn * PQcancelConnectStart(PGconn *conn);
+extern PGcancelConn * PQcancelConnect(PGconn *conn);
+extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
/* backwards compatible version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index fcce13843e..8af3dd0ee7 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,8 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest;
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -574,6 +576,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -691,6 +698,7 @@ extern int pqPutInt(int value, size_t bytes, PGconn *conn);
extern int pqPutMsgStart(char msg_type, PGconn *conn);
extern int pqPutMsgEnd(PGconn *conn);
extern int pqReadData(PGconn *conn);
+extern int pqReadDataOrEof(PGconn *conn);
extern int pqFlush(PGconn *conn);
extern int pqWait(int forRead, int forWrite, PGconn *conn);
extern int pqWaitTimed(int forRead, int forWrite, PGconn *conn,
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..27188d43bb 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,116 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+static void
+confirm_query_cancelled(PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal("PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal("query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal("query failed with a different error than cancellation: %s", PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+static void
+test_cancel(PGconn *conn)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ char errorbuf[256];
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /* test PQrequestcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ PQrequestCancel(conn);
+ confirm_query_cancelled(conn);
+
+ /* test PQcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancel = PQgetCancel(conn);
+ PQcancel(cancel, errorbuf, sizeof(errorbuf));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelConnect */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQcancelConnect(conn);
+ if (PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConnectStart and then polling with PQcancelConnectPoll */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQcancelConnectStart(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelConnectPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1555,6 +1665,7 @@ print_test_list(void)
printf("singlerow\n");
printf("transaction\n");
printf("uniqviol\n");
+ printf("cancel\n");
}
int
@@ -1642,7 +1753,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.17.1
Attachment: 0002-Add-documentation-for-libpq_pipeline-tests.patch (application/octet-stream)
From e59f8174d5d006e78e474873ca4ba89a4e5c21d2 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 13 Jan 2022 15:26:35 +0100
Subject: [PATCH 2/2] Add documentation for libpq_pipeline tests
This adds some explanation on how to run and add libpq tests.
---
src/test/modules/libpq_pipeline/README | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/src/test/modules/libpq_pipeline/README b/src/test/modules/libpq_pipeline/README
index d8174dd579..3eac0bf131 100644
--- a/src/test/modules/libpq_pipeline/README
+++ b/src/test/modules/libpq_pipeline/README
@@ -1 +1,21 @@
Test programs and libraries for libpq
+=====================================
+
+You can manually run a specific test by running:
+
+ ./libpq_pipeline <name of test>
+
+To add a new libpq test you need to edit libpq_pipeline.c. There you
+should add the name of your new test to the
+"print_test_list" function. Then in main you should do something when this test
+name is passed to the program.
+
+If the order in which postgres protocol messages are sent is deterministic for
+your test, you can generate a trace of these messages using the following
+command:
+
+ ./libpq_pipeline mynewtest -t traces/mynewtest.trace
+
+Once you've done that you should make sure that when running "make check"
+the generated trace is compared to the expected trace. This is done by adding
+your test name to the $cmptrace definition in the t/001_libpq_pipeline.pl file.
--
2.17.1
On Thu, 2022-01-13 at 14:51 +0000, Jelte Fennema wrote:
Attached is an updated patch which I believe fixes Windows and the other test failures.
At least on my machine make check-world passes now when compiled with --enable-tap-tests.
I also included a second patch which adds some basic documentation for the libpq tests.
This is not a full review by any means, but here are my thoughts so
far:
NOTE: I have not tested this with GSS for the moment. My expectation is
that using this new API with a GSS connection will result in a
CONNECTION_BAD status when calling PQcancelStatus. The reason for this
is that GSS reads will also need to communicate back that an EOF was
found, just like I've done for TLS reads and unencrypted reads.
For what it's worth, I did a smoke test with a Kerberos environment via
./libpq_pipeline cancel '... gssencmode=require'
and the tests claim to pass.
2. Cancel connections benefit automatically from any improvements made
to the normal connection establishment codepath. Examples of things
that it currently gets for free are TLS support and
keepalive settings.
This seems like a big change compared to PQcancel(); one that's not
really hinted at elsewhere. Having the async version of an API open up
a completely different code path with new features is pretty surprising
to me.
And does the backend actually handle cancel requests via TLS (or GSS)?
It didn't look that way from a quick scan, but I may have missed
something.
@@ -1555,6 +1665,7 @@ print_test_list(void)
printf("singlerow\n");
printf("transaction\n");
printf("uniqviol\n");
+ printf("cancel\n");
}
This should probably go near the top; it looks like the existing list
is alphabetized.
The new cancel tests don't print any feedback. It'd be nice to get the
same sort of output as the other tests.
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
+extern PGcancelConn * PQcancelConnectStart(PGconn *conn);
+extern PGcancelConn * PQcancelConnect(PGconn *conn);
+extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
That's a lot of new entry points, most of which don't do anything
except call their twin after a pointer cast. How painful would it be to
just use the existing APIs as-is, and error out when calling
unsupported functions if conn->cancelRequest is true?
--Jacob
Jacob Champion <pchampion@vmware.com> writes:
On Thu, 2022-01-13 at 14:51 +0000, Jelte Fennema wrote:
2. Cancel connections benefit automatically from any improvements made
to the normal connection establishment codepath. Examples of things
that it currently gets for free are TLS support and
keepalive settings.
This seems like a big change compared to PQcancel(); one that's not
really hinted at elsewhere. Having the async version of an API open up
a completely different code path with new features is pretty surprising
to me.
Well, the patch lacks any user-facing doco at all, so a-fortiori this
point is not covered. I trust the plan was to write docs later.
I kind of feel that this patch is going in the wrong direction.
I do see the need for a version of PQcancel that can encrypt the
transmitted cancel request (and yes, that should work on the backend
side; see recursion in ProcessStartupPacket). I have not seen
requests for a non-blocking version, and this doesn't surprise me.
I feel that the whole non-blocking aspect of libpq probably belongs
to another era when people didn't trust threads.
So what I'd do is make a version that just takes a PGconn, sends the
cancel request, and returns success or failure; never mind the
non-blocking aspect. One possible long-run advantage of this is that
it might be possible to "sync" the cancel request so that we know,
or at least can find out afterwards, exactly which query got
cancelled; something that's fundamentally impossible if the cancel
function works from a clone data structure that is disconnected
from the current connection state.
(Note that it probably makes sense to make a clone PGconn to pass
to fe-connect.c, internally to this function. I just don't want
to expose that to the app.)
regards, tom lane
Hi,
On 2022-03-24 17:41:53 -0400, Tom Lane wrote:
I kind of feel that this patch is going in the wrong direction.
I do see the need for a version of PQcancel that can encrypt the
transmitted cancel request (and yes, that should work on the backend
side; see recursion in ProcessStartupPacket). I have not seen
requests for a non-blocking version, and this doesn't surprise me.
I feel that the whole non-blocking aspect of libpq probably belongs
to another era when people didn't trust threads.
That's not a whole lot of fun if you think of cases like postgres_fdw (or
citus as in Jelte's case), which run inside the backend. Even with just a
single postgres_fdw, we don't really want to end up in an uninterruptible
PQcancel() that doesn't even react to pg_terminate_backend().
Even if using threads weren't an issue, I don't really buy the premise - most
networking code has moved *away* from using dedicated threads for each
connection. It just doesn't scale.
Leaving PQcancel aside, we use the non-blocking libpq stuff widely
ourselves. I think walreceiver, isolationtester, pgbench etc would be *much*
harder to get working equally well if there was just blocking calls. If
anything, we're getting to the point where purely blocking functionality
shouldn't be added anymore.
Greetings,
Andres Freund
On Thu, Mar 24, 2022 at 6:49 PM Andres Freund <andres@anarazel.de> wrote:
That's not a whole lot of fun if you think of cases like postgres_fdw (or
citus as in Jelte's case), which run inside the backend. Even with just a
single postgres_fdw, we don't really want to end up in an uninterruptible
PQcancel() that doesn't even react to pg_terminate_backend().Even if using threads weren't an issue, I don't really buy the premise - most
networking code has moved *away* from using dedicated threads for each
connection. It just doesn't scale.Leaving PQcancel aside, we use the non-blocking libpq stuff widely
ourselves. I think walreceiver, isolationtester, pgbench etc would be *much*
harder to get working equally well if there was just blocking calls. If
anything, we're getting to the point where purely blocking functionality
shouldn't be added anymore.
+1. I think having a non-blocking version of PQcancel() available is a
great idea, and I've wanted it myself. See commit
ae9bfc5d65123aaa0d1cca9988037489760bdeae.
That said, I don't think that this particular patch is going in the
right direction. I think Jacob's comment upthread is right on point:
"This seems like a big change compared to PQcancel(); one that's not
really hinted at elsewhere. Having the async version of an API open up
a completely different code path with new features is pretty
surprising to me." It seems to me that we want to end up with similar
code paths for PQcancel() and the non-blocking version of cancel. We
could get there in two ways. One way would be to implement the
non-blocking functionality in a manner that matches exactly what
PQcancel() does now. I imagine that the existing code from PQcancel()
would move, with some amount of change, into a new set of non-blocking
APIs. Perhaps PQcancel() would then be rewritten to use those new APIs
instead of hand-rolling the same logic. The other possible approach
would be to first change the blocking version of PQcancel() to use the
regular connection code instead of its own idiosyncratic logic, and
then as a second step, extend it with non-blocking interfaces that use
the regular non-blocking connection code. With either of these
approaches, we end up with the functionality working similarly in the
blocking and non-blocking code paths.
Leaving the question of approach aside, I think it's fairly clear that
this patch cannot be seriously considered for v15. One problem is the
lack of user-facing documentation, but there's other stuff that just
doesn't look sufficiently well-considered. For example, it updates the
comment for pqsecure_read() to say "Returns -1 in case of failures,
except in the case of clean connection closure then it returns -2."
But that function calls any of three different implementation
functions depending on the situation and the patch only updates one of
them. And it updates that function to return -2 when the error is
ECONNRESET, which seems to fly in the face of the comment's idea that
this is the "clean connection closure" case. I think it's probably a
bad sign that this patch is tinkering with logic in this sort of
low-level function anyway. pqReadData() is a really general function
that manages to work with non-blocking I/O already, so why does
non-blocking query cancellation need to change its return values, or
whether or not it drops data in certain cases?
I'm also skeptical about the fact that we end up with a whole bunch of
new functions that are just wrappers around existing functions. That's
not a scalable approach. Every function that we have for a PGconn will
eventually need a variant that deals with a PGcancelConn. That seems
kind of pointless, especially considering that a PGcancelConn is
*exactly* a PGconn in disguise. If we decide to pursue the approach of
using the existing infrastructure for PGconn objects to handle query
cancellation, we ought to manipulate them using the same functions we
currently do, with some kind of mode or flag or switch or something
that you can use to turn a regular PGconn into something that cancels
a query. Maybe you create the PGconn and call
PQsprinkleMagicCancelDust() on it, and then you just proceed using the
existing functions, or something like that. Then, not only do the
existing functions not need query-cancel analogues, but any new
functions we add in the future don't either.
I'll set the target version for this patch to 16. I hope work continues.
Thanks,
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
That said, I don't think that this particular patch is going in the
right direction. I think Jacob's comment upthread is right on point:
"This seems like a big change compared to PQcancel(); one that's not
really hinted at elsewhere. Having the async version of an API open up
a completely different code path with new features is pretty
surprising to me." It seems to me that we want to end up with similar
code paths for PQcancel() and the non-blocking version of cancel. We
could get there in two ways. One way would be to implement the
non-blocking functionality in a manner that matches exactly what
PQcancel() does now. I imagine that the existing code from PQcancel()
would move, with some amount of change, into a new set of non-blocking
APIs. Perhaps PQcancel() would then be rewritten to use those new APIs
instead of hand-rolling the same logic. The other possible approach
would be to first change the blocking version of PQcancel() to use the
regular connection code instead of its own idiosyncratic logic, and
then as a second step, extend it with non-blocking interfaces that use
the regular non-blocking connection code. With either of these
approaches, we end up with the functionality working similarly in the
blocking and non-blocking code paths.
I think you misunderstand where the real pain point is. The reason
that PQcancel's functionality is so limited has little to do with
blocking vs non-blocking, and everything to do with the fact that
it's designed to be safe to call from a SIGINT handler. That makes
it quite impractical to invoke OpenSSL, and probably our GSS code
as well. If we want support for all connection-time options then
we have to make a new function that does not promise signal safety.
I'm prepared to yield on the question of whether we should provide
a non-blocking version, though I still say that (a) an easier-to-call,
one-step blocking alternative would be good too, and (b) it should
not be designed around the assumption that there's a completely
independent state object being used to perform the cancel. Even in
the non-blocking case, callers should only deal with the original
PGconn.
Leaving the question of approach aside, I think it's fairly clear that
this patch cannot be seriously considered for v15.
Yeah, I don't think it's anywhere near fully baked yet. On the other
hand, we do have a couple of weeks left.
regards, tom lane
On Fri, Mar 25, 2022 at 2:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I think you misunderstand where the real pain point is. The reason
that PQcancel's functionality is so limited has little to do with
blocking vs non-blocking, and everything to do with the fact that
it's designed to be safe to call from a SIGINT handler. That makes
it quite impractical to invoke OpenSSL, and probably our GSS code
as well. If we want support for all connection-time options then
we have to make a new function that does not promise signal safety.
Well, that's a fair point, but it's somewhat orthogonal to the one I'm
making, which is that a non-blocking version of function X might be
expected to share code or at least functionality with X itself. Having
something that is named in a way that implies asynchrony without other
differences but which is actually different in other important ways is
no good.
I'm prepared to yield on the question of whether we should provide
a non-blocking version, though I still say that (a) an easier-to-call,
one-step blocking alternative would be good too, and (b) it should
not be designed around the assumption that there's a completely
independent state object being used to perform the cancel. Even in
the non-blocking case, callers should only deal with the original
PGconn.
Well, this sounds like you're arguing for the first of the two
approaches I thought would be acceptable, rather than the second.
Leaving the question of approach aside, I think it's fairly clear that
this patch cannot be seriously considered for v15.
Yeah, I don't think it's anywhere near fully baked yet. On the other
hand, we do have a couple of weeks left.
We do?
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
Well, that's a fair point, but it's somewhat orthogonal to the one I'm
making, which is that a non-blocking version of function X might be
expected to share code or at least functionality with X itself. Having
something that is named in a way that implies asynchrony without other
differences but which is actually different in other important ways is
no good.
Yeah. We need to choose a name for these new function(s) that is
sufficiently different from "PQcancel" that people won't expect them
to behave exactly the same as that does. I lack any good ideas about
that, how about you?
Yeah, I don't think it's anywhere near fully baked yet. On the other
hand, we do have a couple of weeks left.
We do?
Um, you did read the pgsql-release discussion about setting the feature
freeze deadline, no?
regards, tom lane
Thanks for all the feedback everyone. I'll try to send a new patch
later this week that includes user-facing docs and a simplified API.
For now a few responses:
Yeah. We need to choose a name for these new function(s) that is
sufficiently different from "PQcancel" that people won't expect them
to behave exactly the same as that does. I lack any good ideas about
that, how about you?
So I guess the names I proposed were not great, since everyone seems to be falling over them.
But I'd like to make my intention clear with the current naming. After this patch there would be
four different APIs for starting a cancelation:
1. PQrequestCancel: deprecated+old, not signal-safe function for requesting query cancellation, only uses a specific set of connection options
2. PQcancel: Cancel queries in a signal safe way, to be signal-safe it only uses a limited set of connection options
3. PQcancelConnect: Cancel queries in a non-signal safe way that uses all connection options
4. PQcancelConnectStart: Cancel queries in a non-signal safe and non-blocking way that uses all connection options
So the idea was that you should not look at PQcancelConnectStart as the non-blocking
version of PQcancel, but as the non-blocking version of PQcancelConnect. I'll try to
think of some different names too, but IMHO these names could be acceptable
when their differences are addressed sufficiently in the documentation.
One other approach to naming that comes to mind now is repurposing PQrequestCancel:
1. PQrequestCancel: Cancel queries in a non-signal safe way that uses all connection options
2. PQrequestCancelStart: Cancel queries in a non-signal safe and non-blocking way that uses all connection options
3. PQcancel: Cancel queries in a signal safe way, to be signal-safe it only uses a limited set of connection options
I think it's probably a
bad sign that this patch is tinkering with logic in this sort of
low-level function anyway. pqReadData() is a really general function
that manages to work with non-blocking I/O already, so why does
non-blocking query cancellation need to change its return values, or
whether or not it drops data in certain cases?
The reason for this low-level change is that the cancellation part of the
Postgres protocol follows a different, much simpler design
than all the other parts. The client does not expect a response message back
from the server after sending the cancellation request. The expectation
is that the server signals completion by closing the connection, i.e. sending EOF.
For all other parts of the protocol, connection termination should be initiated
client side by sending a Terminate message. So the server closing (sending
EOF) is always unexpected and is thus currently considered an error by pqReadData.
But since this is not the case for the cancellation protocol, the result is
changed to -2 in case of EOF to make it possible to distinguish between
an EOF and an actual error.
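To make that asymmetry concrete: a cancel request is a single fixed 16-byte packet, after which the only thing the client can do is read until EOF. Below is a self-contained sketch of that packet's layout, following the protocol documentation (build_cancel_request is an illustrative helper, not a libpq function):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/*
 * Build the 16-byte CancelRequest packet from the frontend/backend
 * protocol: Int32 length (16, including itself), Int32 request code
 * 80877102 = (1234 << 16) | 5678, Int32 backend PID, Int32 secret key.
 * All fields are sent in network byte order.  The server never sends a
 * reply to this packet; it just processes it and closes the connection.
 */
static void
build_cancel_request(uint8_t *buf, uint32_t be_pid, uint32_t be_key)
{
	uint32_t	msg_length = htonl(16);
	uint32_t	cancel_code = htonl(80877102);
	uint32_t	pid = htonl(be_pid);
	uint32_t	key = htonl(be_key);

	memcpy(buf, &msg_length, 4);
	memcpy(buf + 4, &cancel_code, 4);
	memcpy(buf + 8, &pid, 4);
	memcpy(buf + 12, &key, 4);
}
```

Since the server's only "response" is the close itself, a client that wants to know the request arrived has no choice but to read until EOF, which is exactly why pqReadData needs to report EOF as something other than an error here.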
And it updates that function to return -2 when the error is
ECONNRESET, which seems to fly in the face of the comment's idea that
this is the "clean connection closure" case.
The diff sadly does not include the very relevant comment right above these
lines. Pasting the whole case statement here to clear up this confusion:
case SSL_ERROR_ZERO_RETURN:
/*
* Per OpenSSL documentation, this error code is only returned for
* a clean connection closure, so we should not report it as a
* server crash.
*/
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
n = -2;
break;
For example, it updates the
comment for pqsecure_read() to say "Returns -1 in case of failures,
except in the case of clean connection closure then it returns -2."
But that function calls any of three different implementation
functions depending on the situation and the patch only updates one of
them.
That comment is indeed not describing what is happening correctly and I'll
try to make it clearer. The main reason for it being incorrect is coming from
the fact that receiving EOFs is handled in different places based on the
encryption method:
1. Unencrypted TCP: EOF is not returned as an error by pqsecure_read, but detected by pqReadData (see comments related to definitelyEOF)
2. OpenSSL: EOF is returned as an error by pqsecure_read (see copied case statement above)
3. GSS: When writing the patch I was not sure how EOF handling worked here, but given that the tests passed for Jacob on GSS, I'm guessing it works the same as unencrypted TCP.
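For the unencrypted case, the EOF/error distinction ultimately comes down to what recv() returns: 0 for a clean close by the peer, -1 for a real error. A minimal sketch of surfacing that as the -2 convention discussed here (read_or_eof is a hypothetical helper, not libpq code):

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Read from a plain socket, distinguishing a clean connection close
 * (return -2) from a genuine error (return -1, errno set).  For the
 * cancellation protocol a clean close is the *expected* outcome, so
 * callers must be able to tell the two apart.
 */
static int
read_or_eof(int sock, char *buf, size_t len)
{
	ssize_t		n = recv(sock, buf, len, 0);

	if (n > 0)
		return (int) n;			/* got data */
	if (n == 0)
		return -2;				/* clean EOF: peer closed the connection */
	return -1;					/* real error; inspect errno */
}
```

TLS complicates this because OpenSSL reports the close-notify condition through its error-code machinery (SSL_ERROR_ZERO_RETURN) rather than a zero-length read, which is why the patch has to touch fe-secure-openssl.c separately.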
I attached a new version of this patch, which does three main things:
1. Change the PQrequestCancel implementation to use the regular
connection establishment code, to support all connection options
including encryption.
2. Add PQrequestCancelStart which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
3. Add PQconnectComplete, which completes a connection started by
PQrequestCancelStart. This is useful if you want a thread-safe, but
blocking cancel (without having a need for signal safety).
This change un-deprecates PQrequestCancel, since now there's actually an
advantage to using it over PQcancel. It also includes user-facing documentation
for all these functions.
As an API design change from the previous version, PQrequestCancelStart now
returns a regular PGconn for the cancel connection.
@Tom Lane regarding this:
Even in the non-blocking case, callers should only deal with the original PGconn.
This would by definition result in non-threadsafe code (afaict). So I refrained from doing this.
The blocking version doesn't expose a PGconn at all, but the non-blocking one now returns a new PGconn.
There are two more changes that I at least want to do before considering this patch mergeable:
1. Go over all the functions that can be called with a PGconn, but should not be
called with a cancellation PGconn and error out or exit early.
2. Copy over the SockAddr from the original connection and always connect to
the same socket. I believe with the current code the cancellation could end up
at the wrong server if there are multiple hosts listed in the connection string.
And there's a third item that I would like to do as a bonus:
3. Actually use the non-blocking API for the postgres_fdw code to implement a
timeout, which would allow this comment to be removed:
/*
* Issue cancel request. Unfortunately, there's no good way to limit the
* amount of time that we might block inside PQgetCancel().
*/
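The underlying pattern such a timeout needs is the classic non-blocking connect: start the connect, wait for writability with a bounded select(), then check SO_ERROR. A plain-sockets sketch of that pattern, independent of libpq (connect_with_timeout is an illustrative helper, not part of any proposed API):

```c
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/*
 * Connect with a hard upper bound on how long we may block: issue a
 * non-blocking connect(), wait for writability with select() for at most
 * timeout_sec seconds, then fetch SO_ERROR to learn the real outcome.
 * Returns the connected fd, or -1 on timeout/error.
 */
static int
connect_with_timeout(const struct sockaddr_in *addr, int timeout_sec)
{
	int			fd = socket(AF_INET, SOCK_STREAM, 0);
	fd_set		wfds;
	struct timeval tv = {timeout_sec, 0};
	int			err = 0;
	socklen_t	errlen = sizeof(err);

	if (fd < 0)
		return -1;
	fcntl(fd, F_SETFL, O_NONBLOCK);
	if (connect(fd, (const struct sockaddr *) addr, sizeof(*addr)) < 0 &&
		errno != EINPROGRESS)
	{
		close(fd);
		return -1;
	}
	FD_ZERO(&wfds);
	FD_SET(fd, &wfds);
	if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1 ||
		getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen) < 0 || err != 0)
	{
		close(fd);
		return -1;
	}
	return fd;
}
```

A non-blocking cancel API would let postgres_fdw drive the equivalent loop itself (via PQcancelSocket and PQcancelConnectPoll, or whatever the final names end up being) instead of sitting inside an unbounded blocking call.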
So a next version of this patch can be expected somewhere later this week.
But any feedback on the current version would be appreciated, because
these 3 changes won't change the overall design much.
Jelte
Attachments:
Attachment: 0001-Add-documentation-for-libpq_pipeline-tests.patch (application/octet-stream)
From 22a02899d47d46ed05ada2e38e3f9804981b96eb Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 13 Jan 2022 15:26:35 +0100
Subject: [PATCH 1/2] Add documentation for libpq_pipeline tests
This adds some explanation on how to run and add libpq tests.
---
src/test/modules/libpq_pipeline/README | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/src/test/modules/libpq_pipeline/README b/src/test/modules/libpq_pipeline/README
index d8174dd579..6eda6c5756 100644
--- a/src/test/modules/libpq_pipeline/README
+++ b/src/test/modules/libpq_pipeline/README
@@ -1 +1,21 @@
Test programs and libraries for libpq
+=====================================
+
+You can manually run a specific test by running:
+
+ ./libpq_pipeline <name of test>
+
+To add a new libpq test to this module you need to edit libpq_pipeline.c. There
+you should add the name of your new test to the "print_test_list" function.
+Then in main you should handle the case where this test name is passed to the
+program.
+
+If the order in which postgres protocol messages are sent is deterministic for
+your test, then you can generate a trace of these messages using the following
+command:
+
+ ./libpq_pipeline mynewtest -t traces/mynewtest.trace
+
+Once you've done that you should make sure that when running "make check"
+the generated trace is compared to the expected trace. This is done by adding
+your test name to the $cmptrace definition in the t/001_libpq_pipeline.pl file.
--
2.17.1
0002-Add-non-blocking-version-of-PQcancel.patch
From 98a67a65eee6b7ee1275e48e5053ba8ab3055014 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH 2/2] Add non-blocking version of PQcancel
This patch does two things:
1. Change PQrequestCancel to use the regular connection establishment,
to address a few security issues.
2. Add PQrequestCancelStart, which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns.
This patch adds a new cancellation API to libpq which is called
PQrequestCancelStart. This API can be used to send cancellations in a
non-blocking fashion.
This patch also includes a test for all of libpq's cancellation APIs.
The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 12 +-
contrib/postgres_fdw/connection.c | 11 +-
doc/src/sgml/libpq.sgml | 212 +++++++++++++----
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 2 +
src/interfaces/libpq/fe-connect.c | 225 +++++++++++++++---
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 13 +-
src/interfaces/libpq/libpq-int.h | 2 +
src/test/isolation/isolationtester.c | 29 +--
.../modules/libpq_pipeline/libpq_pipeline.c | 214 ++++++++++++++++-
13 files changed, 627 insertions(+), 126 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a06d4bd12d..30cbb22a22 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1380,22 +1380,14 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
-
- if (res == 1)
+ if (PQrequestCancel(conn))
PG_RETURN_TEXT_P(cstring_to_text("OK"));
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(PQerrorMessage(conn)));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 129ca79221..2e182645f7 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1263,8 +1263,6 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
TimestampTz endtime;
bool timed_out;
@@ -1279,19 +1277,14 @@ pgfdw_cancel_query(PGconn *conn)
* Issue cancel request. Unfortunately, there's no good way to limit the
* amount of time that we might block inside PQgetCancel().
*/
- if ((cancel = PQgetCancel(conn)))
- {
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ if (!PQrequestCancel(conn))
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
+ pchomp(PQerrorMessage(conn)))));
return false;
}
- PQfreeCancel(cancel);
- }
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 0b2a8720f0..6b0683b9b0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -499,6 +499,30 @@ switch(PQstatus(conn))
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQconnectComplete">
+ <term><function>PQconnectComplete</function><indexterm><primary>PQconnectComplete</primary></indexterm></term>
+ <listitem>
+ <para>
+ Complete a connection attempt that was started in nonblocking mode,
+ blocking until the connection is established.
+
+<synopsis>
+int PQconnectComplete(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function can be used instead of
+ <xref linkend="libpq-PQconnectPoll"/>
+ to complete a connection that was initially started in a non-blocking
+ manner. However, instead of continuing to complete the connection in a
+ non-blocking way, calling this function will block until the connection
+ is completed. This is especially useful to complete connections that were
+ started by <xref linkend="libpq-PQrequestCancelStart"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQconndefaults">
<term><function>PQconndefaults</function><indexterm><primary>PQconndefaults</primary></indexterm></term>
<listitem>
@@ -660,7 +684,7 @@ void PQreset(PGconn *conn);
<varlistentry id="libpq-PQresetStart">
<term><function>PQresetStart</function><indexterm><primary>PQresetStart</primary></indexterm></term>
- <term><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
+ <term id="libpq-PQresetPoll"><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
<listitem>
<para>
Reset the communication channel to the server, in a nonblocking manner.
@@ -5617,13 +5641,137 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQrequestCancel">
+ <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+int PQrequestCancel(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function operates directly on the <structname>PGconn</structname>
+ object, and in case of failure stores the error message in the
+ <structname>PGconn</structname> object (whence it can be retrieved
+ by <xref linkend="libpq-PQerrorMessage"/>). This behaviour makes this
+ function unsafe to call from within multi-threaded programs or
+ signal handlers, since it is possible that overwriting the
+ <structname>PGconn</structname>'s error message will
+ mess up the operation currently in progress on the connection in another
+ thread.
+ </para>
+
+ <para>
+ The return value is 1 if the cancel request was successfully
+ dispatched and 0 if not. Successful dispatch is no guarantee that the
+ request will have any effect, however. If the cancellation is effective,
+ the current command will terminate early and return an error result. If
+ the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at
+ all.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQrequestCancelStart">
+ <term><function>PQrequestCancelStart</function><indexterm><primary>PQrequestCancelStart</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used in a thread-safe and/or non-blocking manner.
+<synopsis>
+PGconn *PQrequestCancelStart(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function returns a new <structname>PGconn</structname>. This
+ connection object can be used to cancel the query that's running on the
+ original connection in a thread-safe way. To do so,
+ <xref linkend="libpq-PQrequestCancelStart"/>
+ must be called while no other thread is using the original PGconn. Then
+ the returned <structname>PGconn</structname>
+ can be used at a later point in any thread to send a cancel request.
+ A cancel request can be sent using the returned PGconn in two ways,
+ non-blocking using <xref linkend="libpq-PQconnectPoll"/>
+ or blocking using <xref linkend="libpq-PQconnectComplete"/>.
+ </para>
+
+ <para>
+ In addition to all the statuses that a regular
+ <structname>PGconn</structname> can have, the returned connection
+ can have two additional statuses:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQconnectPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQrequestCancelStart"/>. No connection to the
+ server has been initiated yet at this point. To start cancel request
+ initiation use <xref linkend="libpq-PQconnectPoll"/>
+ for non-blocking behaviour and <xref linkend="libpq-PQconnectComplete"/>
+ for blocking behaviour.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-connection-cancel-finished">
+ <term><symbol>CONNECTION_CANCEL_FINISHED</symbol></term>
+ <listitem>
+ <para>
+ Cancel request was successfully sent. It's not possible to continue
+ using the cancellation connection now, so it should be freed using
+ <xref linkend="libpq-PQfinish"/>. Alternatively, the cancellation
+ connection can be reset using
+ <xref linkend="libpq-PQresetStart"/>, so that it can be reused to
+ cancel a future query on the same connection.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Since this object represents a connection only meant for cancellations, it
+ can only be used with a limited subset of the functions that can be used
+ for a regular <structname>PGconn</structname> object. The functions that
+ this object can be passed to are
+ <xref linkend="libpq-PQstatus"/>,
+ <xref linkend="libpq-PQerrorMessage"/>,
+ <xref linkend="libpq-PQconnectComplete"/>,
+ <xref linkend="libpq-PQconnectPoll"/>,
+ <xref linkend="libpq-PQsocket"/>,
+ <xref linkend="libpq-PQresetStart"/>, and
+ <xref linkend="libpq-PQfinish"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5665,7 +5813,9 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
@@ -5679,15 +5829,6 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
recommended size is 256 bytes).
</para>
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
<para>
<xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
handler, if the <parameter>errbuf</parameter> is a local variable in the
@@ -5696,33 +5837,24 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
also be invoked from a thread that is separate from the one
manipulating the <structname>PGconn</structname> object.
</para>
- </listitem>
- </varlistentry>
- </variablelist>
-
- <variablelist>
- <varlistentry id="libpq-PQrequestCancel">
- <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
-
- <listitem>
- <para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
-<synopsis>
-int PQrequestCancel(PGconn *conn);
-</synopsis>
- </para>
<para>
- Requests that the server abandon processing of the current
- command. It operates directly on the
- <structname>PGconn</structname> object, and in case of failure stores the
- error message in the <structname>PGconn</structname> object (whence it can
- be retrieved by <xref linkend="libpq-PQerrorMessage"/>). Although
- the functionality is the same, this approach is not safe within
- multiple-thread programs or signal handlers, since it is possible
- that overwriting the <structname>PGconn</structname>'s error message will
- mess up the operation currently in progress on the connection.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function, a
+ connection is made to the same host and port as the original. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. This means the connection
+ is never encrypted using TLS or GSS.
</para>
</listitem>
</varlistentry>
@@ -8835,10 +8967,10 @@ int PQisthreadsafe();
</para>
<para>
- The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
+ The functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQrequestCancelStart"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index a30c66f13a..ff18dab043 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -162,19 +162,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQrequestCancel(conn);
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f7609d0c64 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,5 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQrequestCancelStart 187
+PQconnectComplete 188
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index cf554d389f..5462e1305c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -378,6 +378,7 @@ static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -604,8 +605,11 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +741,58 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQrequestCancelStart
+ *
+ * Asynchronously cancel the query currently running on the given connection.
+ * This requires polling the returned PGconn to actually complete the
+ * cancellation of the request.
+ */
+PGconn *
+PQrequestCancelStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGconn *) cancelConn;
+
+ /*
+ * Copy over information needed to cancel
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return cancelConn;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -914,6 +970,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2134,6 +2230,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2274,13 +2379,15 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
case CONNECTION_OK:
return PGRES_POLLING_OK;
+ case CONNECTION_CANCEL_FINISHED:
+ return PGRES_POLLING_OK;
/* These are reading states */
case CONNECTION_AWAITING_RESPONSE:
@@ -2292,6 +2399,17 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+ if (n == -2 && conn->cancelRequest)
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2301,6 +2419,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2758,6 +2877,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2966,6 +3095,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4194,6 +4342,11 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4311,6 +4464,12 @@ PQresetStart(PGconn *conn)
{
closePGconn(conn);
+ if (conn->cancelRequest)
+ {
+ conn->status = CONNECTION_STARTING;
+ return 1;
+ }
+
return connectDBStart(conn);
}
@@ -4663,6 +4822,22 @@ cancel_errReturn:
return false;
}
+/*
+ * PQconnectComplete: takes a non-blocking cancel connection and completes it
+ * in a blocking manner.
+ *
+ * Returns 1 if able to connect successfully and 0 if not.
+ *
+ * This can be useful if you only care about the thread safety of
+ * PQrequestCancelStart and not about its non-blocking functionality.
+ */
+int
+PQconnectComplete(PGconn *cancelConn)
+{
+ connectDBComplete(cancelConn);
+ return cancelConn->status != CONNECTION_BAD;
+}
+
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4679,45 +4854,31 @@ cancel_errReturn:
int
PQrequestCancel(PGconn *conn)
{
- int r;
- PGcancel *cancel;
+ PGconn *cancelConn = NULL;
- /* Check we have an open connection */
- if (!conn)
- return false;
-
- if (conn->sock == PGINVALID_SOCKET)
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
{
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
+ appendPQExpBufferStr(&conn->errorMessage, libpq_gettext("out of memory\n"));
return false;
}
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
+ if (cancelConn->status == CONNECTION_BAD)
{
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- if (!r)
+ if (!PQconnectComplete(cancelConn))
{
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- return r;
+ freePGconn(cancelConn);
+ return true;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index d76bb3957a..a944cb2c12 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index d3bf57b850..4ffaea63c1 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -252,7 +252,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index a1dc7b796d..9771805dd3 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure is a clean
+ * connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7986445f1a..42367d4886 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
* The existence of these should never be relied upon - they should only
* be used for user feedback or similar purposes.
*/
+ CONNECTION_STARTING, /* Waiting for connection attempt to be
+ * started. */
CONNECTION_STARTED, /* Waiting for connection to be made. */
CONNECTION_MADE, /* Connection OK; waiting to send. */
CONNECTION_AWAITING_RESPONSE, /* Waiting for a response from the
@@ -165,6 +168,10 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -282,6 +289,7 @@ extern PGconn *PQconnectStart(const char *conninfo);
extern PGconn *PQconnectStartParams(const char *const *keywords,
const char *const *values, int expand_dbname);
extern PostgresPollingStatusType PQconnectPoll(PGconn *conn);
+extern int PQconnectComplete(PGconn *conn);
/* Synchronous (blocking) */
extern PGconn *PQconnectdb(const char *conninfo);
@@ -330,9 +338,12 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* more secure version of PQcancel */
extern int PQrequestCancel(PGconn *conn);
+/* non blocking and thread safe version of PQrequestCancel */
+extern PGconn *PQrequestCancelStart(PGconn *conn);
+
/* Accessor functions for PGconn objects */
extern char *PQdb(const PGconn *conn);
extern char *PQuser(const PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0cee4b142..ff9555e263 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,8 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest;
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 12179f2514..fe1ca168c8 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -948,26 +948,17 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
-
- if (cancel != NULL)
- {
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ if (PQrequestCancel(conn)) {
+ /*
+ * print to stdout not stderr, as this should appear
+ * in the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQrequestCancel failed: %s\n", PQerrorMessage(conn));
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..95f1d5eb2f 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,215 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+static void
+confirm_query_cancelled(PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal("PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal("query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal("query failed with a different error than cancellation: %s", PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+static void
+test_cancel(PGconn *conn)
+{
+ PGcancel *cancel = NULL;
+ PGconn *cancelConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /* test PQcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQrequestCancelStart and then polling with PQcancelConnectPoll */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQresetStart works on the cancel connection and it can be reused
+ * after
+ */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQresetPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQfinish(cancelConn);
+
+ /* test PQconnectComplete */
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /* test PQconnectComplete with reset connection */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(3)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQfinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1545,6 +1754,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1642,7 +1852,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.17.1
Note that the patch is still failing in various ways in cirrus.
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/37/3511
You may already know that it's possible to trigger the Cirrus CI tasks using a
GitHub branch. See src/tools/ci/README.
Attached is the latest version of this patch, which I think is now in a state
in which it could be merged. The changes are:
1. Don't do host and address discovery for cancel connections. They now
reuse raddr and whichhost from the original connection. This makes
sure the cancel always goes to the right server, even when DNS records
have changed or a different server would now be chosen for connection
strings containing multiple hosts.
2. Fix the Windows CI failure. This is done both by using the thread-safe code
in the dblink cancellation code, and by not erroring out a cancellation
connection on Windows in case of any errors. This last one works around
the issue described in this thread:
/messages/by-id/90b34057-4176-7bb0-0dbb-9822a5f6425b@greiz-reinsdorf.de
I also went over most of the functions that take a PGconn, to see if they needed
extra checks to guard against being executed on a cancel connection. So far all
seemed fine: either they should be okay to execute against a cancellation
connection, or they already failed anyway because a cancellation connection
never reaches the CONNECTION_OK state. So I didn't add any checks specifically
for cancel connections. I'll go over them again next week with a fresh head, to
see if I missed any cases.
I'll try to find some time early next week to implement non-blocking cancellation
usage in postgres_fdw, i.e. the bonus task I mentioned in my previous email. But
I don't think it's necessary to have that implemented before merging.
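Put together, the intended event-loop usage of the new API looks roughly like the sketch below. This is a minimal sketch assuming the patch is applied: PQrequestCancelStart and CONNECTION_CANCEL_FINISHED are introduced by this patch and do not exist in released libpq, and the wait_for_socket() helper is hypothetical, standing in for whatever socket-readiness notification the surrounding event loop provides.

```c
/*
 * Sketch of non-blocking query cancellation using the API added by this
 * patch. Error details can be fetched from the cancel connection with
 * PQerrorMessage() before PQfinish() if desired.
 */
#include <stdbool.h>
#include <libpq-fe.h>

/* Hypothetical event-loop hook: block until the socket is ready. */
extern void wait_for_socket(int sock, bool for_read);

static bool
cancel_nonblocking(PGconn *conn)
{
	PGconn	   *cancelConn = PQrequestCancelStart(conn);
	bool		ok;

	if (cancelConn == NULL || PQstatus(cancelConn) == CONNECTION_BAD)
	{
		PQfinish(cancelConn);
		return false;
	}

	/* Drive the cancel connection with the regular polling machinery. */
	for (;;)
	{
		PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);

		if (pollres == PGRES_POLLING_OK)
			break;
		if (pollres == PGRES_POLLING_FAILED)
		{
			PQfinish(cancelConn);
			return false;
		}

		/* Hand control back to the event loop until the socket is ready. */
		wait_for_socket(PQsocket(cancelConn),
						pollres == PGRES_POLLING_READING);
	}

	/* Per this patch, a successfully sent cancel ends in this state. */
	ok = (PQstatus(cancelConn) == CONNECTION_CANCEL_FINISHED);
	PQfinish(cancelConn);
	return ok;
}
```

A caller that does not mind blocking could instead call PQconnectComplete(cancelConn) and skip the polling loop entirely.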
Attachments:
0002-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 8277c3f5eeaf13f018ab9b5899650f109c4a45b6 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH 2/2] Add non-blocking version of PQcancel
This patch does two things:
1. Change PQrequestCancel to use the regular connection establishment,
to address a few security issues.
2. Add PQrequestCancelStart, which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns.
This patch adds a new cancellation API to libpq which is called
PQrequestCancelStart. This API can be used to send cancellations in a
non-blocking fashion.
This patch also includes a test for all of libpq's cancellation APIs.
The test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 31 +-
contrib/postgres_fdw/connection.c | 19 +-
doc/src/sgml/libpq.sgml | 212 +++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 2 +
src/interfaces/libpq/fe-connect.c | 341 +++++++++++++++---
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 9 +-
src/interfaces/libpq/libpq-int.h | 4 +
src/test/isolation/isolationtester.c | 28 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 214 ++++++++++-
13 files changed, 743 insertions(+), 150 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a06d4bd12d..f3935331f6 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1380,22 +1380,35 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGconn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
+ {
+ PG_RETURN_TEXT_P(cstring_to_text("out of memory"));
+ }
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
+ }
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQconnectComplete(cancelConn))
+ {
+ msg = "OK";
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ }
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 129ca79221..8ad810d621 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1263,8 +1263,6 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
TimestampTz endtime;
bool timed_out;
@@ -1279,18 +1277,13 @@ pgfdw_cancel_query(PGconn *conn)
* Issue cancel request. Unfortunately, there's no good way to limit the
* amount of time that we might block inside PQgetCancel().
*/
- if ((cancel = PQgetCancel(conn)))
+ if (!PQrequestCancel(conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
- {
- ereport(WARNING,
- (errcode(ERRCODE_CONNECTION_FAILURE),
- errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
- }
- PQfreeCancel(cancel);
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQerrorMessage(conn)))));
+ return false;
}
/* Get and discard the result of the query. */
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1c20901c3c..45f8001fbd 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -499,6 +499,30 @@ switch(PQstatus(conn))
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQconnectComplete">
+ <term><function>PQconnectComplete</function><indexterm><primary>PQconnectComplete</primary></indexterm></term>
+ <listitem>
+ <para>
+ Complete a connection attempt that was started in nonblocking mode,
+ blocking until it finishes.
+
+<synopsis>
+int PQconnectComplete(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function can be used instead of
+ <xref linkend="libpq-PQconnectPoll"/>
+ to complete a connection that was initially started in a nonblocking
+ manner. However, instead of continuing to advance the connection in a
+ nonblocking way, calling this function blocks until the connection
+ attempt is completed. This is especially useful for completing
+ connections that were started by <xref linkend="libpq-PQrequestCancelStart"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQconndefaults">
<term><function>PQconndefaults</function><indexterm><primary>PQconndefaults</primary></indexterm></term>
<listitem>
@@ -660,7 +684,7 @@ void PQreset(PGconn *conn);
<varlistentry id="libpq-PQresetStart">
<term><function>PQresetStart</function><indexterm><primary>PQresetStart</primary></indexterm></term>
- <term><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
+ <term id="libpq-PQresetPoll"><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
<listitem>
<para>
Reset the communication channel to the server, in a nonblocking manner.
@@ -5617,13 +5641,137 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQrequestCancel">
+ <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+int PQrequestCancel(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense during or after authentication are
+ ignored, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function operates directly on the <structname>PGconn</structname>
+ object, and in case of failure stores the error message in the
+ <structname>PGconn</structname> object (whence it can be retrieved
+ by <xref linkend="libpq-PQerrorMessage"/>). This behaviour makes this
+ function unsafe to call from within multi-threaded programs or
+ signal handlers, since it is possible that overwriting the
+ <structname>PGconn</structname>'s error message will
+ mess up the operation currently in progress on the connection in another
+ thread.
+ </para>
+
+ <para>
+ The return value is 1 if the cancel request was successfully
+ dispatched and 0 if not. Successful dispatch is no guarantee that the
+ request will have any effect, however. If the cancellation is effective,
+ the current command will terminate early and return an error result. If
+ the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at
+ all.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQrequestCancelStart">
+ <term><function>PQrequestCancelStart</function><indexterm><primary>PQrequestCancelStart</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used in a thread-safe and/or nonblocking manner.
+<synopsis>
+PGconn *PQrequestCancelStart(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function returns a new <structname>PGconn</structname>. This
+ connection object can be used to cancel the query that's running on the
+ original connection in a thread-safe way. To do so,
+ <xref linkend="libpq-PQrequestCancelStart"/>
+ must be called while no other thread is using the original PGconn. The
+ returned <structname>PGconn</structname>
+ can then be used at a later point in any thread to send a cancel request.
+ A cancel request can be sent using the returned PGconn in two ways:
+ nonblocking, using <xref linkend="libpq-PQconnectPoll"/>,
+ or blocking, using <xref linkend="libpq-PQconnectComplete"/>.
+ </para>
+
+ <para>
+ In addition to all the statuses that a regular
+ <structname>PGconn</structname>
+ can have, the returned connection can have two additional statuses:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQconnectPoll"/>
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQrequestCancelStart"/>. No connection to
+ the server has been initiated yet at this point. To initiate the
+ cancel request, use <xref linkend="libpq-PQconnectPoll"/>
+ for nonblocking behaviour or <xref linkend="libpq-PQconnectComplete"/>
+ for blocking behaviour.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-connection-cancel-finished">
+ <term><symbol>CONNECTION_CANCEL_FINISHED</symbol></term>
+ <listitem>
+ <para>
+ The cancel request was successfully sent. The cancellation connection
+ cannot be used any further, so it should be freed using
+ <xref linkend="libpq-PQfinish"/>. Alternatively, the cancellation
+ connection can be reset using
+ <xref linkend="libpq-PQresetStart"/>; that way it can be reused to
+ cancel a future query on the same connection.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Since this object represents a connection only meant for cancellations,
+ it can only be used with a limited subset of the functions that can be used
+ for a regular <structname>PGconn</structname> object. The functions that
+ this object can be passed to are
+ <xref linkend="libpq-PQstatus"/>,
+ <xref linkend="libpq-PQerrorMessage"/>,
+ <xref linkend="libpq-PQconnectComplete"/>,
+ <xref linkend="libpq-PQconnectPoll"/>,
+ <xref linkend="libpq-PQsocket"/>,
+ <xref linkend="libpq-PQresetStart"/>, and
+ <xref linkend="libpq-PQfinish"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5665,7 +5813,9 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
@@ -5679,15 +5829,6 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
recommended size is 256 bytes).
</para>
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
<para>
<xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
handler, if the <parameter>errbuf</parameter> is a local variable in the
@@ -5696,33 +5837,24 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
also be invoked from a thread that is separate from the one
manipulating the <structname>PGconn</structname> object.
</para>
- </listitem>
- </varlistentry>
- </variablelist>
-
- <variablelist>
- <varlistentry id="libpq-PQrequestCancel">
- <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
-
- <listitem>
- <para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
-<synopsis>
-int PQrequestCancel(PGconn *conn);
-</synopsis>
- </para>
<para>
- Requests that the server abandon processing of the current
- command. It operates directly on the
- <structname>PGconn</structname> object, and in case of failure stores the
- error message in the <structname>PGconn</structname> object (whence it can
- be retrieved by <xref linkend="libpq-PQerrorMessage"/>). Although
- the functionality is the same, this approach is not safe within
- multiple-thread programs or signal handlers, since it is possible
- that overwriting the <structname>PGconn</structname>'s error message will
- mess up the operation currently in progress on the connection.
+ To achieve signal safety, some concessions had to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing the
+ connection for the cancellation request. When calling this function, a
+ connection is made to the same host and port as the original one. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. This means the connection
+ is never encrypted using TLS or GSS.
</para>
</listitem>
</varlistentry>
@@ -8850,10 +8982,10 @@ int PQisthreadsafe();
</para>
<para>
- The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
+ The functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQrequestCancelStart"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index a30c66f13a..ff18dab043 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -162,19 +162,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQrequestCancel(conn);
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f7609d0c64 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,5 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQrequestCancelStart 187
+PQconnectComplete 188
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index cf554d389f..e8356e75a2 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -378,6 +378,7 @@ static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -604,8 +605,17 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQresetStart invocations. Otherwise they don't know the secret token of
+ * the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +747,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQrequestCancelStart
+ *
+ * Asynchronously cancel the query currently running on the given connection.
+ * This requires polling the returned PGconn to actually complete the
+ * cancellation of the request.
+ */
+PGconn *
+PQrequestCancelStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+ cancelConn->try_next_host = false;
+ cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -914,6 +986,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2082,10 +2194,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should try only one host,
+ * which was determined in PQrequestCancelStart. So leave these settings
+ * alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2134,6 +2253,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2274,13 +2402,15 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
case CONNECTION_OK:
return PGRES_POLLING_OK;
+ case CONNECTION_CANCEL_FINISHED:
+ return PGRES_POLLING_OK;
/* These are reading states */
case CONNECTION_AWAITING_RESPONSE:
@@ -2292,6 +2422,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2301,6 +2459,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2325,6 +2484,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2344,6 +2511,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2529,19 +2705,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQrequestCancelStart.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2557,7 +2741,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2567,12 +2751,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
appendPQExpBuffer(&conn->errorMessage,
@@ -2595,7 +2785,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2624,7 +2814,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2718,8 +2908,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2758,6 +2949,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2966,6 +3167,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4194,6 +4414,11 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4311,6 +4536,12 @@ PQresetStart(PGconn *conn)
{
closePGconn(conn);
+ if (conn->cancelRequest)
+ {
+ conn->status = CONNECTION_STARTING;
+ return 1;
+ }
+
return connectDBStart(conn);
}
@@ -4663,6 +4894,22 @@ cancel_errReturn:
return false;
}
+/*
+ * PQconnectComplete: takes a non-blocking cancel connection and completes it
+ * in a blocking manner.
+ *
+ * Returns 1 if able to connect successfully and 0 if not.
+ *
+ * This can be useful if you only care about the thread safety of
+ * PQrequestCancelStart and not about its non-blocking functionality.
+ */
+int
+PQconnectComplete(PGconn *cancelConn)
+{
+ connectDBComplete(cancelConn);
+ return cancelConn->status != CONNECTION_BAD;
+}
+
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4679,45 +4926,31 @@ cancel_errReturn:
int
PQrequestCancel(PGconn *conn)
{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
+ PGconn *cancelConn = NULL;
- if (conn->sock == PGINVALID_SOCKET)
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
{
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
+ appendPQExpBufferStr(&conn->errorMessage, libpq_gettext("out of memory\n"));
return false;
}
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
+ if (cancelConn->status == CONNECTION_BAD)
{
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- if (!r)
+ if (!PQconnectComplete(cancelConn))
{
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- return r;
+ freePGconn(cancelConn);
+ return true;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index d76bb3957a..a944cb2c12 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 24a598b6e4..8a2a7c112c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -255,7 +255,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index a1dc7b796d..9771805dd3 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means that there
+ * was a clean connection closure; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7986445f1a..24695a6026 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
* The existence of these should never be relied upon - they should only
* be used for user feedback or similar purposes.
*/
+ CONNECTION_STARTING, /* Waiting for connection attempt to be
+ * started. */
CONNECTION_STARTED, /* Waiting for connection to be made. */
CONNECTION_MADE, /* Connection OK; waiting to send. */
CONNECTION_AWAITING_RESPONSE, /* Waiting for a response from the
@@ -282,6 +285,7 @@ extern PGconn *PQconnectStart(const char *conninfo);
extern PGconn *PQconnectStartParams(const char *const *keywords,
const char *const *values, int expand_dbname);
extern PostgresPollingStatusType PQconnectPoll(PGconn *conn);
+extern int PQconnectComplete(PGconn *conn);
/* Synchronous (blocking) */
extern PGconn *PQconnectdb(const char *conninfo);
@@ -330,9 +334,12 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* more secure version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
+/* non-blocking and thread-safe version of PQrequestCancel */
+extern PGconn *PQrequestCancelStart(PGconn *conn);
+
/* Accessor functions for PGconn objects */
extern char *PQdb(const PGconn *conn);
extern char *PQuser(const PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0cee4b142..50b6a7bc7d 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 12179f2514..b073235197 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -948,26 +948,18 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
-
- if (cancel != NULL)
+ if (PQrequestCancel(conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQerrorMessage(conn));
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..4e53d3c165 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,215 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+static void
+confirm_query_cancelled(PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal("PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal("query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal("query failed with a different error than cancellation: %s", PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+static void
+test_cancel(PGconn *conn)
+{
+ PGcancel *cancel = NULL;
+ PGconn *cancelConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /* test PQcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQrequestCancelStart and then polling with PQcancelConnectPoll */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQresetStart works on the cancel connection and it can be reused
+ * after
+ */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQresetPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQfinish(cancelConn);
+
+ /* test PQconnectComplete */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /* test PQconnectComplete with reset connection */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQfinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1545,6 +1754,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1642,7 +1852,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.17.1
Here is what I consider the final version of this patch. I don't have any
changes planned myself (except for ones that come up during review).
Things that changed since the previous iteration:
1. postgres_fdw now uses the non-blocking cancellation API (including test).
2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.
Attachment: 0002-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From ebb611ca522a5fbabf9334ed47681e9490644aea Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH 2/2] Add non-blocking version of PQcancel
This patch does four things:
1. Change the PQrequestCancel implementation to use the regular
connection establishment code, to support all connection options
including encryption.
2. Add PQrequestCancelStart, which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
3. Add PQconnectComplete, which completes a connection started by
PQrequestCancelStart. This is useful if you want a thread-safe but
blocking cancel (without having a need for signal-safety).
4. Use this new cancellation API everywhere in the codebase where
signal-safety is not a necessity.
This change un-deprecates PQrequestCancel, since now there's actually an
advantage to using it over PQcancel. It also includes user-facing
documentation for all the newly added functions.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQrequestCancelStart can now be used
instead, to have a non-blocking way of sending cancel requests. The
postgres_fdw cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 28 +-
contrib/postgres_fdw/connection.c | 93 ++++-
.../postgres_fdw/expected/postgres_fdw.out | 15 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 +
doc/src/sgml/libpq.sgml | 212 +++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 2 +
src/interfaces/libpq/fe-connect.c | 341 +++++++++++++++---
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 9 +-
src/interfaces/libpq/libpq-int.h | 4 +
src/test/isolation/isolationtester.c | 28 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 229 +++++++++++-
15 files changed, 849 insertions(+), 153 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a06d4bd12d..551dc8617a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1380,22 +1380,30 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGconn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
+ }
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQconnectComplete(cancelConn))
+ {
+ msg = "OK";
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ }
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 129ca79221..c270ac3dd1 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1263,35 +1263,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGconn *cancel_conn = PQrequestCancelStart(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQstatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQerrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGconn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQsocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQfinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQerrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQfinish(cancel_conn);
+ return failed;
}
+ PQfinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 11e9b4e8cc..2608f63d79 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2567,6 +2567,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 6b5de89e14..e17a9569b4 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -681,6 +682,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1c20901c3c..45f8001fbd 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -499,6 +499,30 @@ switch(PQstatus(conn))
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQconnectComplete">
+ <term><function>PQconnectComplete</function><indexterm><primary>PQconnectComplete</primary></indexterm></term>
+ <listitem>
+ <para>
+ Complete a connection attempt that was started in a nonblocking
+ manner, blocking until it finishes.
+
+<synopsis>
+int PQconnectComplete(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function can be used instead of
+ <xref linkend="libpq-PQconnectPoll"/>
+ to complete a connection that was initially started in a non-blocking
+ manner. However, instead of continuing to complete the connection in a
+ non-blocking way, calling this function will block until the connection
+ is completed. This is especially useful to complete connections that were
+ started by <xref linkend="libpq-PQrequestCancelStart"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQconndefaults">
<term><function>PQconndefaults</function><indexterm><primary>PQconndefaults</primary></indexterm></term>
<listitem>
@@ -660,7 +684,7 @@ void PQreset(PGconn *conn);
<varlistentry id="libpq-PQresetStart">
<term><function>PQresetStart</function><indexterm><primary>PQresetStart</primary></indexterm></term>
- <term><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
+ <term id="libpq-PQresetPoll"><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
<listitem>
<para>
Reset the communication channel to the server, in a nonblocking manner.
@@ -5617,13 +5641,137 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQrequestCancel">
+ <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+int PQrequestCancel(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored, though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function operates directly on the <structname>PGconn</structname>
+ object, and in case of failure stores the error message in the
+ <structname>PGconn</structname> object (whence it can be retrieved
+ by <xref linkend="libpq-PQerrorMessage"/>). This behaviour makes this
+ function unsafe to call from within multi-threaded programs or
+ signal handlers, since it is possible that overwriting the
+ <structname>PGconn</structname>'s error message will
+ mess up the operation currently in progress on the connection in another
+ thread.
+ </para>
+
+ <para>
+ The return value is 1 if the cancel request was successfully
+ dispatched and 0 if not. Successful dispatch is no guarantee that the
+ request will have any effect, however. If the cancellation is effective,
+ the current command will terminate early and return an error result. If
+ the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at
+ all.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQrequestCancelStart">
+ <term><function>PQrequestCancelStart</function><indexterm><primary>PQrequestCancelStart</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used in a thread-safe and/or non-blocking manner.
+<synopsis>
+PGconn *PQrequestCancelStart(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function returns a new <structname>PGconn</structname>. This
+ connection object can be used to cancel the query that's running on the
+ original connection in a thread-safe way. To do so, this function must
+ be called while no other thread is using the original
+ <structname>PGconn</structname>. Then the returned
+ <structname>PGconn</structname>
+ can be used at a later point in any thread to send a cancel request.
+ A cancel request can be sent using the returned
+ <structname>PGconn</structname> in two ways: non-blocking, using
+ <xref linkend="libpq-PQconnectPoll"/>,
+ or blocking, using <xref linkend="libpq-PQconnectComplete"/>.
+ </para>
+
+ <para>
+ In addition to all the statuses that a regular
+ <structname>PGconn</structname>
+ can have, the returned connection can have two additional statuses:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQconnectPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQrequestCancelStart"/>. No connection to
+ the server has been initiated yet at this point. To initiate the cancel
+ request, use <xref linkend="libpq-PQconnectPoll"/>
+ for non-blocking behavior and <xref linkend="libpq-PQconnectComplete"/>
+ for blocking behavior.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-connection-cancel-finished">
+ <term><symbol>CONNECTION_CANCEL_FINISHED</symbol></term>
+ <listitem>
+ <para>
+ The cancel request was successfully sent. It's not possible to continue
+ using the cancellation connection now, so it should be freed using
+ <xref linkend="libpq-PQfinish"/>. Alternatively, the cancellation
+ connection can be reset using
+ <xref linkend="libpq-PQresetStart"/>, so that it can be reused to
+ cancel a future query on the same connection.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Since this object represents a connection only meant for cancellations, it
+ can only be used with a limited subset of the functions that can be used
+ for a regular <structname>PGconn</structname> object. The functions that
+ this object can be passed to are
+ <xref linkend="libpq-PQstatus"/>,
+ <xref linkend="libpq-PQerrorMessage"/>,
+ <xref linkend="libpq-PQconnectComplete"/>,
+ <xref linkend="libpq-PQconnectPoll"/>,
+ <xref linkend="libpq-PQsocket"/>,
+ <xref linkend="libpq-PQresetStart"/>, and
+ <xref linkend="libpq-PQfinish"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
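For illustration, the non-blocking flow described above could look roughly like the following sketch. It assumes this patch is applied (PQrequestCancelStart and CONNECTION_STARTING exist); error handling is abbreviated, and a real event loop would register the socket with its poller instead of calling select():

```c
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Sketch: send a cancel request for the query running on conn without
 * blocking indefinitely on any single libpq call. Returns 1 on success. */
static int
cancel_nonblocking(PGconn *conn)
{
	PGconn	   *cancelConn = PQrequestCancelStart(conn);

	if (cancelConn == NULL)
		return 0;				/* out of memory */
	if (PQstatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "cancel failed: %s\n", PQerrorMessage(cancelConn));
		PQfinish(cancelConn);
		return 0;
	}

	for (;;)
	{
		PostgresPollingStatusType st = PQconnectPoll(cancelConn);
		int			sock = PQsocket(cancelConn);
		fd_set		rmask,
					wmask;

		if (st == PGRES_POLLING_OK)
			break;				/* cancel request was sent */
		if (st == PGRES_POLLING_FAILED)
		{
			fprintf(stderr, "cancel failed: %s\n", PQerrorMessage(cancelConn));
			PQfinish(cancelConn);
			return 0;
		}

		/* stand-in for the event loop: wait for the requested readiness */
		FD_ZERO(&rmask);
		FD_ZERO(&wmask);
		if (st == PGRES_POLLING_READING)
			FD_SET(sock, &rmask);
		else
			FD_SET(sock, &wmask);
		(void) select(sock + 1, &rmask, &wmask, NULL, NULL);
	}

	PQfinish(cancelConn);
	return 1;
}
```

On success, PQstatus(cancelConn) is CONNECTION_CANCEL_FINISHED just before PQfinish.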
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5665,7 +5813,9 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
@@ -5679,15 +5829,6 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
recommended size is 256 bytes).
</para>
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
<para>
<xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
handler, if the <parameter>errbuf</parameter> is a local variable in the
@@ -5696,33 +5837,24 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
also be invoked from a thread that is separate from the one
manipulating the <structname>PGconn</structname> object.
</para>
- </listitem>
- </varlistentry>
- </variablelist>
-
- <variablelist>
- <varlistentry id="libpq-PQrequestCancel">
- <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
-
- <listitem>
- <para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
-<synopsis>
-int PQrequestCancel(PGconn *conn);
-</synopsis>
- </para>
<para>
- Requests that the server abandon processing of the current
- command. It operates directly on the
- <structname>PGconn</structname> object, and in case of failure stores the
- error message in the <structname>PGconn</structname> object (whence it can
- be retrieved by <xref linkend="libpq-PQerrorMessage"/>). Although
- the functionality is the same, this approach is not safe within
- multiple-thread programs or signal handlers, since it is possible
- that overwriting the <structname>PGconn</structname>'s error message will
- mess up the operation currently in progress on the connection.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function, a
+ connection is made to the same host and port as the original
+ connection. The only
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example,
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. This means the connection
+ is never encrypted using TLS or GSS.
</para>
</listitem>
</varlistentry>
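For comparison, the signal-safe path via PQgetCancel/PQcancel (which this patch keeps) can be sketched as below. This is a hedged sketch: the handler name and the global are illustrative, and the PGcancel must be created with PQgetCancel before the handler can fire:

```c
#include <signal.h>
#include "libpq-fe.h"

/* Set from the main thread via PQgetCancel(conn) before installing
 * the handler; PQcancel itself is safe to call from a signal handler. */
static PGcancel *volatile cancel_handle = NULL;

static void
sigint_handler(int signum)
{
	char		errbuf[256];	/* local buffer, as required for safety */

	(void) signum;
	if (cancel_handle != NULL)
		(void) PQcancel(cancel_handle, errbuf, sizeof(errbuf));
}
```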
@@ -8850,10 +8982,10 @@ int PQisthreadsafe();
</para>
<para>
- The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
+ The functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQrequestCancelStart"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index a30c66f13a..ff18dab043 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -162,19 +162,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQrequestCancel(conn);
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f7609d0c64 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,5 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQrequestCancelStart 187
+PQconnectComplete 188
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index cf554d389f..e8356e75a2 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -378,6 +378,7 @@ static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -604,8 +605,17 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQresetStart invocations. Otherwise they don't know the secret token of
+ * the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +747,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQrequestCancelStart
+ *
+ * Asynchronously cancel the query running on the given connection. This
+ * requires polling the returned PGconn to actually complete the cancellation
+ * of the request.
+ */
+PGconn *
+PQrequestCancelStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+ cancelConn->try_next_host = false;
+ cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -914,6 +986,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2082,10 +2194,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host,
+ * which is determined in PQrequestCancelStart. So leave these settings
+ * alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2134,6 +2253,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2274,13 +2402,15 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
case CONNECTION_OK:
return PGRES_POLLING_OK;
+ case CONNECTION_CANCEL_FINISHED:
+ return PGRES_POLLING_OK;
/* These are reading states */
case CONNECTION_AWAITING_RESPONSE:
@@ -2292,6 +2422,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behavior for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2301,6 +2459,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2325,6 +2484,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2344,6 +2511,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2529,19 +2705,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQrequestCancelStart.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2557,7 +2741,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2567,12 +2751,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
appendPQExpBuffer(&conn->errorMessage,
@@ -2595,7 +2785,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2624,7 +2814,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2718,8 +2908,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2758,6 +2949,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2966,6 +3167,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4194,6 +4414,11 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4311,6 +4536,12 @@ PQresetStart(PGconn *conn)
{
closePGconn(conn);
+ if (conn->cancelRequest)
+ {
+ conn->status = CONNECTION_STARTING;
+ return 1;
+ }
+
return connectDBStart(conn);
}
@@ -4663,6 +4894,22 @@ cancel_errReturn:
return false;
}
+/*
+ * PQconnectComplete: takes a non-blocking cancel connection and completes it
+ * in a blocking manner.
+ *
+ * Returns 1 if able to connect successfully and 0 if not.
+ *
+ * This can be useful if you only care about the thread safety of
+ * PQrequestCancelStart and not about its non-blocking functionality.
+ */
+int
+PQconnectComplete(PGconn *cancelConn)
+{
+ connectDBComplete(cancelConn);
+ return cancelConn->status != CONNECTION_BAD;
+}
+
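Callers that want thread safety but not the non-blocking loop can pair PQrequestCancelStart with PQconnectComplete, much like the reimplemented PQrequestCancel below does. A minimal sketch, assuming this patch is applied:

```c
#include "libpq-fe.h"

/* Thread-safe blocking cancel: call PQrequestCancelStart while no other
 * thread is using conn, then complete the request from any thread. */
static int
cancel_blocking(PGconn *conn)
{
	PGconn	   *cancelConn = PQrequestCancelStart(conn);
	int			ok;

	if (cancelConn == NULL)
		return 0;				/* out of memory */
	ok = (PQstatus(cancelConn) != CONNECTION_BAD &&
		  PQconnectComplete(cancelConn));
	PQfinish(cancelConn);
	return ok;
}
```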
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4679,45 +4926,31 @@ cancel_errReturn:
int
PQrequestCancel(PGconn *conn)
{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
+ PGconn *cancelConn = NULL;
- if (conn->sock == PGINVALID_SOCKET)
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
{
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
+ appendPQExpBufferStr(&conn->errorMessage, libpq_gettext("out of memory\n"));
return false;
}
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
+ if (cancelConn->status == CONNECTION_BAD)
{
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- if (!r)
+ if (!PQconnectComplete(cancelConn))
{
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- return r;
+ freePGconn(cancelConn);
+ return true;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index d76bb3957a..a944cb2c12 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 24a598b6e4..8a2a7c112c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -255,7 +255,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index a1dc7b796d..9771805dd3 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except when the failure means that there
+ * was a clean connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7986445f1a..24695a6026 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
* The existence of these should never be relied upon - they should only
* be used for user feedback or similar purposes.
*/
+ CONNECTION_STARTING, /* Waiting for connection attempt to be
+ * started. */
CONNECTION_STARTED, /* Waiting for connection to be made. */
CONNECTION_MADE, /* Connection OK; waiting to send. */
CONNECTION_AWAITING_RESPONSE, /* Waiting for a response from the
@@ -282,6 +285,7 @@ extern PGconn *PQconnectStart(const char *conninfo);
extern PGconn *PQconnectStartParams(const char *const *keywords,
const char *const *values, int expand_dbname);
extern PostgresPollingStatusType PQconnectPoll(PGconn *conn);
+extern int PQconnectComplete(PGconn *conn);
/* Synchronous (blocking) */
extern PGconn *PQconnectdb(const char *conninfo);
@@ -330,9 +334,12 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* more secure version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
+/* non-blocking and thread-safe version of PQrequestCancel */
+extern PGconn *PQrequestCancelStart(PGconn *conn);
+
/* Accessor functions for PGconn objects */
extern char *PQdb(const PGconn *conn);
extern char *PQuser(const PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0cee4b142..50b6a7bc7d 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 12179f2514..b073235197 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -948,26 +948,18 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
-
- if (cancel != NULL)
+ if (PQrequestCancel(conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQerrorMessage(conn));
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..ed625d486e 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,230 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected to");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+static void
+test_cancel(PGconn *conn)
+{
+ PGcancel *cancel = NULL;
+ PGconn *cancelConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /* test PQcancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQrequestCancelStart and then polling with PQconnectPoll */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQresetStart works on the cancel connection and it can be reused
+ * after
+ */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQresetPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQfinish(cancelConn);
+
+ /* test PQconnectComplete */
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /* test PQconnectComplete with reset connection */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (PQsendQuery(conn, "SELECT pg_sleep(10)") != 1)
+ pg_fatal("failed to send query: %s", PQerrorMessage(conn));
+ pg_usleep(10000);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQfinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1545,6 +1769,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1642,7 +1867,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.17.1
Resending with a problematic email removed from CC...
On Mon, Apr 04, 2022 at 03:21:54PM +0000, Jelte Fennema wrote:
2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.
Apparently there's still an occasional issue.
https://cirrus-ci.com/task/6613309985128448
result 232/352 (error): ERROR: duplicate key value violates unique constraint "ppln_uniqviol_pkey"
DETAIL: Key (id)=(116) already exists.
This shows that the issue is pretty rare:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/38/3511
--
Justin
(resent because it was blocked from the mailing-list due to inclusion of a blocked email address in the To line)
From: Andres Freund <andres@anarazel.de>
On 2022-04-04 15:21:54 +0000, Jelte Fennema wrote:
> 2. Added some extra sleeps to the cancellation test, to remove random
> failures on FreeBSD.

That's extremely rarely the solution to address test reliability
issues. It'll fail when running tests under valgrind etc.

Why do you need sleeps / can you find another way to make the test reliable?
The problem they are solving is racy behaviour between sending the query
and sending the cancellation. If the cancellation is handled before the query
is started, then the query doesn't get cancelled. To solve this problem I used
the sleeps to wait a bit before sending the cancellation request.
When I wrote this, I couldn't think of a better way to do it than with sleeps.
But I didn't like it either (and I still don't). These emails made me start to think
again about other ways of solving the problem. I think I've found another
solution (see attached patch). The way I solve it now is by using another
connection to check the state of the first one.
Jelte
Attachments:
0001-Add-documentation-for-libpq_pipeline-tests.patchapplication/octet-stream; name=0001-Add-documentation-for-libpq_pipeline-tests.patchDownload
From 2be20b649243934d4fec1829d56fda7fc05b7268 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 13 Jan 2022 15:26:35 +0100
Subject: [PATCH 1/2] Add documentation for libpq_pipeline tests
This adds some explanation on how to run and add libpq tests.
---
src/test/modules/libpq_pipeline/README | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/src/test/modules/libpq_pipeline/README b/src/test/modules/libpq_pipeline/README
index d8174dd579..6eda6c5756 100644
--- a/src/test/modules/libpq_pipeline/README
+++ b/src/test/modules/libpq_pipeline/README
@@ -1 +1,21 @@
Test programs and libraries for libpq
+=====================================
+
+You can manually run a specific test by running:
+
+ ./libpq_pipeline <name of test>
+
+To add a new libpq test to this module you need to edit libpq_pipeline.c. There
+you should add the name of your new test to the "print_test_list" function.
+Then in main you should do something when this test name is passed to the
+program.
+
+If the order in which postgres protocol messages are sent is deterministic for
+your test, then you can generate a trace of these messages using the following
+command:
+
+ ./libpq_pipeline mynewtest -t traces/mynewtest.trace
+
+Once you've done that you should make sure that when running "make check"
+the generated trace is compared to the expected trace. This is done by adding
+your test name to the $cmptrace definition in the t/001_libpq_pipeline.pl file.
--
2.34.1
0002-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=0002-Add-non-blocking-version-of-PQcancel.patchDownload
From 437b098b2ec6334affb2d818cf9154c7f170e2dc Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH 2/2] Add non-blocking version of PQcancel
This patch does four things:
1. Change the PQrequestCancel implementation to use the regular
connection establishment code, to support all connection options
including encryption.
2. Add PQrequestCancelStart which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
3. Add PQconnectComplete, which completes a connection started by
PQrequestCancelStart. This is useful if you want a thread-safe but
blocking cancel (without having a need for signal-safety).
4. Use this new cancellation API everywhere in the codebase where
signal-safety is not a necessity.
This change un-deprecates PQrequestCancel, since now there's actually an
advantage to using it over PQcancel. It also includes user-facing
documentation for all the newly added functions.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQrequestCancelStart can now be used
instead, to have a non-blocking way of sending cancel requests. The
postgres_fdw cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 28 +-
contrib/postgres_fdw/connection.c | 93 ++++-
.../postgres_fdw/expected/postgres_fdw.out | 15 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 +
doc/src/sgml/libpq.sgml | 212 +++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 2 +
src/interfaces/libpq/fe-connect.c | 341 +++++++++++++++---
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 9 +-
src/interfaces/libpq/libpq-int.h | 4 +
src/test/isolation/isolationtester.c | 28 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 274 +++++++++++++-
15 files changed, 894 insertions(+), 153 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a561d1d652..b572cf6d5b 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1379,22 +1379,30 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGconn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
+ }
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQconnectComplete(cancelConn))
+ {
+ msg = "OK";
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ }
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 061ffaf329..fa47274d15 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1264,35 +1264,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGconn *cancel_conn = PQrequestCancelStart(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQstatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQerrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGconn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQsocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQfinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQerrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQfinish(cancel_conn);
+ return false;
}
+ PQfinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 44457f930c..6d108b56ef 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2567,6 +2567,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 92d1212027..7e02ed6803 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -681,6 +682,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 37ec3cb4e5..8e033bb8f3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -499,6 +499,30 @@ switch(PQstatus(conn))
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQconnectComplete">
+ <term><function>PQconnectComplete</function><indexterm><primary>PQconnectComplete</primary></indexterm></term>
+ <listitem>
+ <para>
+ Complete a connection attempt that was started in a nonblocking
+ manner, blocking until it is completed.
+
+<synopsis>
+int PQconnectComplete(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function can be used instead of
+ <xref linkend="libpq-PQconnectPoll"/>
+ to complete a connection that was initially started in a non-blocking
+ manner. However, instead of continuing to complete the connection in a
+ non-blocking way, calling this function will block until the connection
+ is completed. This is especially useful to complete connections that were
+ started by <xref linkend="libpq-PQrequestCancelStart"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQconndefaults">
<term><function>PQconndefaults</function><indexterm><primary>PQconndefaults</primary></indexterm></term>
<listitem>
@@ -660,7 +684,7 @@ void PQreset(PGconn *conn);
<varlistentry id="libpq-PQresetStart">
<term><function>PQresetStart</function><indexterm><primary>PQresetStart</primary></indexterm></term>
- <term><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
+ <term id="libpq-PQresetPoll"><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
<listitem>
<para>
Reset the communication channel to the server, in a nonblocking manner.
@@ -5617,13 +5641,137 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQrequestCancel">
+ <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+int PQrequestCancel(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function operates directly on the <structname>PGconn</structname>
+ object, and in case of failure stores the error message in the
+ <structname>PGconn</structname> object (whence it can be retrieved
+ by <xref linkend="libpq-PQerrorMessage"/>). This behaviour makes this
+ function unsafe to call from within multi-threaded programs or
+ signal handlers, since it is possible that overwriting the
+ <structname>PGconn</structname>'s error message will
+ mess up the operation currently in progress on the connection in another
+ thread.
+ </para>
+
+ <para>
+ The return value is 1 if the cancel request was successfully
+ dispatched and 0 if not. Successful dispatch is no guarantee that the
+ request will have any effect, however. If the cancellation is effective,
+ the current command will terminate early and return an error result. If
+ the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at
+ all.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQrequestCancelStart">
+ <term><function>PQrequestCancelStart</function><indexterm><primary>PQrequestCancelStart</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used in a thread-safe and/or non-blocking manner.
+<synopsis>
+PGconn *PQrequestCancelStart(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function returns a new <structname>PGconn</structname>. This
+ connection object can be used to cancel the query that's running on the
+ original connection in a thread-safe way. To do so
+ <xref linkend="libpq-PQrequestCancelStart"/>
+ must be called while no other thread is using the original PGconn. Then
+ the returned <structname>PGconn</structname>
+ can be used at a later point in any thread to send a cancel request.
+ A cancel request can be sent using the returned PGconn in two ways,
+ non-blocking using <xref linkend="libpq-PQconnectPoll"/>
+ or blocking using <xref linkend="libpq-PQconnectComplete"/>.
+ </para>
+
+ <para>
+ In addition to all the statuses that a regular
+ <structname>PGconn</structname>
+ can have, the returned connection can have two additional statuses:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQconnectPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQrequestCancelStart"/>. No connection to the
+ server has been initiated yet at this point. To start cancel request
+ initiation use <xref linkend="libpq-PQconnectPoll"/>
+ for non-blocking behaviour and <xref linkend="libpq-PQconnectComplete"/>
+ for blocking behaviour.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-connection-cancel-finished">
+ <term><symbol>CONNECTION_CANCEL_FINISHED</symbol></term>
+ <listitem>
+ <para>
+ Cancel request was successfully sent. It's not possible to continue
+ using the cancellation connection now, so it should be freed using
+ <xref linkend="libpq-PQfinish"/>. It's also possible to reset the
+ cancellation connection instead using
+ <xref linkend="libpq-PQresetStart"/>; that way it can be reused to
+ cancel a future query on the same connection.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Since this object represents a connection only meant for cancellations,
+ it can only be used with a limited subset of the functions that can be used
+ for a regular <structname>PGconn</structname> object. The functions that
+ this object can be passed to are
+ <xref linkend="libpq-PQstatus"/>,
+ <xref linkend="libpq-PQerrorMessage"/>,
+ <xref linkend="libpq-PQconnectComplete"/>,
+ <xref linkend="libpq-PQconnectPoll"/>,
+ <xref linkend="libpq-PQsocket"/>,
+ <xref linkend="libpq-PQresetStart"/>, and
+ <xref linkend="libpq-PQfinish"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5665,7 +5813,9 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
@@ -5679,15 +5829,6 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
recommended size is 256 bytes).
</para>
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
<para>
<xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
handler, if the <parameter>errbuf</parameter> is a local variable in the
@@ -5696,33 +5837,24 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
also be invoked from a thread that is separate from the one
manipulating the <structname>PGconn</structname> object.
</para>
- </listitem>
- </varlistentry>
- </variablelist>
-
- <variablelist>
- <varlistentry id="libpq-PQrequestCancel">
- <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
-
- <listitem>
- <para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
-<synopsis>
-int PQrequestCancel(PGconn *conn);
-</synopsis>
- </para>
<para>
- Requests that the server abandon processing of the current
- command. It operates directly on the
- <structname>PGconn</structname> object, and in case of failure stores the
- error message in the <structname>PGconn</structname> object (whence it can
- be retrieved by <xref linkend="libpq-PQerrorMessage"/>). Although
- the functionality is the same, this approach is not safe within
- multiple-thread programs or signal handlers, since it is possible
- that overwriting the <structname>PGconn</structname>'s error message will
- mess up the operation currently in progress on the connection.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function, a
+ connection is made to the same host and port as the original connection. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. This means the connection
+ is never encrypted using TLS or GSS.
</para>
</listitem>
</varlistentry>
@@ -8856,10 +8988,10 @@ int PQisthreadsafe();
</para>
<para>
- The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
+ The functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQrequestCancelStart"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index f2e583f9fa..b9f0c0558c 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -158,19 +158,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQrequestCancel(conn);
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f7609d0c64 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,5 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQrequestCancelStart 187
+PQconnectComplete 188
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 6e936bbff3..7390fbec7c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -605,8 +606,17 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQresetStart invocations. Otherwise they don't know the secret token of
+ * the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +747,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQrequestCancelStart
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGconn to actually complete the cancellation of the
+ * request.
+ */
+PGconn *
+PQrequestCancelStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+ cancelConn->try_next_host = false;
+ cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -914,6 +986,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2082,10 +2194,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host,
+ * which is determined in PQrequestCancelStart. So leave these settings
+ * alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2134,6 +2253,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2274,13 +2402,15 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
case CONNECTION_OK:
return PGRES_POLLING_OK;
+ case CONNECTION_CANCEL_FINISHED:
+ return PGRES_POLLING_OK;
/* These are reading states */
case CONNECTION_AWAITING_RESPONSE:
@@ -2292,6 +2422,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2301,6 +2459,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2325,6 +2484,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2344,6 +2511,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2529,19 +2705,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQrequestCancelStart.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2557,7 +2741,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2567,12 +2751,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
appendPQExpBuffer(&conn->errorMessage,
@@ -2595,7 +2785,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2624,7 +2814,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2718,8 +2908,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2758,6 +2949,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2966,6 +3167,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4194,6 +4414,11 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4311,6 +4536,12 @@ PQresetStart(PGconn *conn)
{
closePGconn(conn);
+ if (conn->cancelRequest)
+ {
+ conn->status = CONNECTION_STARTING;
+ return 1;
+ }
+
return connectDBStart(conn);
}
@@ -4663,6 +4894,22 @@ cancel_errReturn:
return false;
}
+/*
+ * PQconnectComplete: takes a non-blocking cancel connection and completes it
+ * in a blocking manner.
+ *
+ * Returns 1 if able to connect successfully and 0 if not.
+ *
+ * This can be useful if you only care about the thread safety of
+ * PQrequestCancelStart and not about its non-blocking functionality.
+ */
+int
+PQconnectComplete(PGconn *cancelConn)
+{
+ connectDBComplete(cancelConn);
+ return cancelConn->status != CONNECTION_BAD;
+}
+
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4679,45 +4926,31 @@ cancel_errReturn:
int
PQrequestCancel(PGconn *conn)
{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
+ PGconn *cancelConn = NULL;
- if (conn->sock == PGINVALID_SOCKET)
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
{
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
+ appendPQExpBufferStr(&conn->errorMessage, libpq_gettext("out of memory\n"));
return false;
}
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
+ if (cancelConn->status == CONNECTION_BAD)
{
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- if (!r)
+ if (!PQconnectComplete(cancelConn))
{
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- return r;
+ freePGconn(cancelConn);
+ return true;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index d76bb3957a..a944cb2c12 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 8117cbd40f..c553a74898 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -255,7 +255,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index a1dc7b796d..9771805dd3 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure is a clean connection
+ * closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7986445f1a..24695a6026 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
* The existence of these should never be relied upon - they should only
* be used for user feedback or similar purposes.
*/
+ CONNECTION_STARTING, /* Waiting for connection attempt to be
+ * started. */
CONNECTION_STARTED, /* Waiting for connection to be made. */
CONNECTION_MADE, /* Connection OK; waiting to send. */
CONNECTION_AWAITING_RESPONSE, /* Waiting for a response from the
@@ -282,6 +285,7 @@ extern PGconn *PQconnectStart(const char *conninfo);
extern PGconn *PQconnectStartParams(const char *const *keywords,
const char *const *values, int expand_dbname);
extern PostgresPollingStatusType PQconnectPoll(PGconn *conn);
+extern int PQconnectComplete(PGconn *conn);
/* Synchronous (blocking) */
extern PGconn *PQconnectdb(const char *conninfo);
@@ -330,9 +334,12 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* more secure version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
+/* non-blocking and thread-safe version of PQrequestCancel */
+extern PGconn *PQrequestCancelStart(PGconn *conn);
+
/* Accessor functions for PGconn objects */
extern char *PQdb(const PGconn *conn);
extern char *PQuser(const PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3db6a17db4..b9ce1d58c1 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 12179f2514..b073235197 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -948,26 +948,18 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
-
- if (cancel != NULL)
+ if (PQrequestCancel(conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQerrorMessage(conn));
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..52503907bd 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,275 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ if (PQsendQuery(conn, "SELECT pg_sleep(30)") != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep(30)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGconn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQrequestCancelStart and then polling with PQcancelConnectPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQresetStart works on the cancel connection and it can be reused
+ * after
+ */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQresetPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQfinish(cancelConn);
+
+ /* test PQconnectComplete */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /* test PQconnectComplete with reset connection */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQfinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1545,6 +1814,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1642,7 +1912,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On Fri, Jun 24, 2022 at 07:36:16PM -0500, Justin Pryzby wrote:
Resending with a problematic email removed from CC...
On Mon, Apr 04, 2022 at 03:21:54PM +0000, Jelte Fennema wrote:
2. Added some extra sleeps to the cancellation test, to remove random failures on FreeBSD.
Apparently there's still an occasional issue.
https://cirrus-ci.com/task/6613309985128448
I think that failure is actually not related to this patch.
There are probably others, but I noticed because it also affected one of my
patches, which changes nothing relevant.
https://cirrus-ci.com/task/5904044051922944
On 2022-Jun-27, Justin Pryzby wrote:
On Fri, Jun 24, 2022 at 07:36:16PM -0500, Justin Pryzby wrote:
Apparently there's still an occasional issue.
https://cirrus-ci.com/task/6613309985128448
I think that failure is actually not related to this patch.
Yeah, it's not -- Kyotaro diagnosed it as a problem in libpq's pipeline
mode. I hope to push his fix soon, but there are nearby problems that I
haven't been able to track down a good fix for. I'm looking into the
whole.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Jelte Fennema <Jelte.Fennema@microsoft.com> writes:
[ non-blocking PQcancel ]
I pushed the 0001 patch (libpq_pipeline documentation) with a bit
of further wordsmithing.
As for 0002, I'm not sure that's anywhere near ready. I doubt it's
a great idea to un-deprecate PQrequestCancel with a major change
in its behavior. If there is anybody out there still using it,
they're not likely to appreciate that. Let's leave that alone and
pick some other name.
I'm also finding the entire design of PQrequestCancelStart etc to
be horribly confusing --- it's not *bad* necessarily, but the chosen
function names are seriously misleading. PQrequestCancelStart doesn't
actually "start" anything, so the apparent parallel with PQconnectStart
is just wrong. It's also fairly unclear what the state of a cancel
PQconn is after the request cycle is completed, and whether you can
re-use it (especially after a failed request), and whether you have
to dispose of it separately.
On the whole it feels like a mistake to have two separate kinds of
PGconn with fundamentally different behaviors and yet no distinction
in the API. I think I'd recommend having a separate struct type
(which might internally contain little more than a pointer to a
cloned PGconn), and provide only a limited set of operations on it.
Seems like create, start/continue cancel request, destroy, and
fetch error message ought to be enough. I don't see a reason why we
need to support all of libpq's inquiry operations on such objects ---
for instance, if you want to know which host is involved, you could
perfectly well query the parent PGconn. Nor do I want to run around
and add code to every single libpq entry point to make it reject cancel
PGconns if it can't support them, but we'd have to do so if there's
just one struct type.
I'm not seeing the use-case for PQconnectComplete. If you want
a non-blocking cancel request, why would you then use a blocking
operation to complete the request? Seems like it'd be better
to have just a monolithic cancel function for those who don't
need non-blocking.
This change:
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
is an absolute non-starter: it breaks ABI for every libpq client,
even ones that aren't using this facility. Why do we need a new
ConnStatusType value anyway? Seems like PostgresPollingStatusType
covers what we need: once you reach PGRES_POLLING_OK, the cancel
request is done.
The test case is still not very bulletproof on slow machines,
as it seems to be assuming that 30 seconds == forever. It
would be all right to use $PostgreSQL::Test::Utils::timeout_default,
but I'm not sure that that's easily retrievable by C code.
Maybe make the TAP test pass it in with another optional switch
to libpq_pipeline? Alternatively, we could teach libpq_pipeline
to do getenv("PG_TEST_TIMEOUT_DEFAULT") with a fallback to 180,
but that feels like it might be overly familiar with the innards
of Utils.pm.
regards, tom lane
Thanks for all the feedback. I attached a new patch that I think
addresses all of it. Below some additional info.
On the whole it feels like a mistake to have two separate kinds of
PGconn with fundamentally different behaviors and yet no distinction
in the API. I think I'd recommend having a separate struct type
(which might internally contain little more than a pointer to a
cloned PGconn), and provide only a limited set of operations on it.
In my first version of this patch, this is exactly what I did. But then
I got this feedback from Jacob, so I changed it to reusing PGconn:
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
+extern PGcancelConn * PQcancelConnectStart(PGconn *conn);
+extern PGcancelConn * PQcancelConnect(PGconn *conn);
+extern PostgresPollingStatusType PQcancelConnectPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);

That's a lot of new entry points, most of which don't do anything
except call their twin after a pointer cast. How painful would it be to
just use the existing APIs as-is, and error out when calling
unsupported functions if conn->cancelRequest is true?
I changed it back to use PGcancelConn as per your suggestion and I
agree that the API got better because of it.
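For illustration, an event-loop caller would drive the cancel object roughly like this. This is only a sketch against the PGcancelConn API proposed in the attached patch (not released libpq), with error handling trimmed; wait_on_socket stands in for whatever readiness mechanism the caller's event loop provides:

```c
/* Sketch: issue a cancel request without blocking, using the
 * PGcancelConn API proposed in this patch. wait_on_socket() is a
 * hypothetical stand-in for the caller's event loop. */
PGcancelConn *cancelConn = PQcancelConnectStart(conn);

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	pg_fatal("%s", PQcancelErrorMessage(cancelConn));

for (;;)
{
	PostgresPollingStatusType st = PQcancelConnectPoll(cancelConn);

	if (st == PGRES_POLLING_OK)
		break;					/* cancel request delivered */
	if (st == PGRES_POLLING_FAILED)
		pg_fatal("%s", PQcancelErrorMessage(cancelConn));

	/* hand the socket to the event loop; wait for readability or
	 * writability depending on st */
	wait_on_socket(PQcancelSocket(cancelConn), st);
}

PQcancelFinish(cancelConn);
```

The same shape appears in the libpq_pipeline test in the patch, just with select() in place of a real event loop.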
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */

is an absolute non-starter: it breaks ABI for every libpq client,
even ones that aren't using this facility.
I removed this now. The main reason was so it was clear that no
queries could be sent over the connection, like is normally the case
when CONNECTION_OK happens. I don't think this is as useful anymore
now that this patch has a dedicated PGcancelStatus function.
NOTE: The CONNECTION_STARTING ConnStatusType is still necessary.
But to keep ABI compatibility I moved it to the end of the enum.
Alternatively, we could teach libpq_pipeline
to do getenv("PG_TEST_TIMEOUT_DEFAULT") with a fallback to 180,
but that feels like it might be overly familiar with the innards
of Utils.pm.
I went with this approach, because this environment variable was
already used in 2 other places than Utils.pm:
- contrib/test_decoding/sql/twophase.sql
- src/test/isolation/isolationtester.c
So, one more place seemed quite harmless.
P.S. I noticed a logical conflict between this patch and my libpq load
balancing patch. Because this patch depends on the connhost array
is constructed the exact same on the second invocation of connectOptions2.
But the libpq loadbalancing patch breaks this assumption. I'm making
a mental (and public) note that whichever of these patches gets merged last
should address this issue.
Attachments:
0001-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 437b098b2ec6334affb2d818cf9154c7f170e2dc Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH] Add non-blocking version of PQcancel
This patch does four things:
1. Change the PQrequestCancel implementation to use the regular
connection establishement code, to support all connection options
including encryption.
2. Add PQrequestCancelStart which is a thread-safe and non-blocking
version of this new PQrequestCancel implementation.
3. Add PQconnectComplete, which completes a connection started by
PQrequestCancelStart. This is useful if you want a thread-safe but
blocking cancel (without having a need for signal-safety).
4. Use this new cancellation API everywhere in the codebase where
signal-safety is not a necessity.
This change un-deprecates PQrequestCancel, since now there's actually an
advantage to using it over PQcancel. It also includes user-facing
documentation for all the newly added functions.
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQrequestCancelStart can now be used
instead, to send cancel requests in a non-blocking way. The
postgres_fdw cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq's cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 28 +-
contrib/postgres_fdw/connection.c | 93 ++++-
.../postgres_fdw/expected/postgres_fdw.out | 15 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 +
doc/src/sgml/libpq.sgml | 212 +++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 2 +
src/interfaces/libpq/fe-connect.c | 341 +++++++++++++++---
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 9 +-
src/interfaces/libpq/libpq-int.h | 4 +
src/test/isolation/isolationtester.c | 28 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 274 +++++++++++++-
15 files changed, 894 insertions(+), 153 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index a561d1d652..b572cf6d5b 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1379,22 +1379,30 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGconn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
+ }
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQconnectComplete(cancelConn))
+ {
+ msg = "OK";
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = pchomp(PQerrorMessage(cancelConn));
+ }
+ PQfinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 061ffaf329..fa47274d15 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1264,35 +1264,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGconn *cancel_conn = PQrequestCancelStart(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQstatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQerrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGconn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQsocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQfinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQerrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQfinish(cancel_conn);
+ return false;
}
+ PQfinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 44457f930c..6d108b56ef 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2567,6 +2567,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 92d1212027..7e02ed6803 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -681,6 +682,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 37ec3cb4e5..8e033bb8f3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -499,6 +499,30 @@ switch(PQstatus(conn))
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQconnectComplete">
+ <term><function>PQconnectComplete</function><indexterm><primary>PQconnectComplete</primary></indexterm></term>
+ <listitem>
+ <para>
+ Complete a connection attempt that was started in a nonblocking
+ manner, blocking until the connection is established.
+
+<synopsis>
+int PQconnectComplete(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function can be used instead of
+ <xref linkend="libpq-PQconnectPoll"/>
+ to complete a connection that was initially started in a non-blocking
+ manner. However, instead of continuing the connection attempt in a
+ non-blocking way, calling this function blocks until the connection
+ is completed. This is especially useful to complete connections that were
+ started by <xref linkend="libpq-PQrequestCancelStart"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQconndefaults">
<term><function>PQconndefaults</function><indexterm><primary>PQconndefaults</primary></indexterm></term>
<listitem>
@@ -660,7 +684,7 @@ void PQreset(PGconn *conn);
<varlistentry id="libpq-PQresetStart">
<term><function>PQresetStart</function><indexterm><primary>PQresetStart</primary></indexterm></term>
- <term><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
+ <term id="libpq-PQresetPoll"><function>PQresetPoll</function><indexterm><primary>PQresetPoll</primary></indexterm></term>
<listitem>
<para>
Reset the communication channel to the server, in a nonblocking manner.
@@ -5617,13 +5641,137 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQrequestCancel">
+ <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+int PQrequestCancel(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function operates directly on the <structname>PGconn</structname>
+ object, and in case of failure stores the error message in the
+ <structname>PGconn</structname> object (whence it can be retrieved
+ by <xref linkend="libpq-PQerrorMessage"/>). This behaviour makes this
+ function unsafe to call from within multi-threaded programs or
+ signal handlers, since it is possible that overwriting the
+ <structname>PGconn</structname>'s error message will
+ mess up the operation currently in progress on the connection in another
+ thread.
+ </para>
+
+ <para>
+ The return value is 1 if the cancel request was successfully
+ dispatched and 0 if not. Successful dispatch is no guarantee that the
+ request will have any effect, however. If the cancellation is effective,
+ the current command will terminate early and return an error result. If
+ the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at
+ all.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQrequestCancelStart">
+ <term><function>PQrequestCancelStart</function><indexterm><primary>PQrequestCancelStart</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used in a thread-safe and/or non-blocking manner.
+<synopsis>
+PGconn *PQrequestCancelStart(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This function returns a new <structname>PGconn</structname>. This
+ connection object can be used to cancel the query that's running on the
+ original connection in a thread-safe way. To do so,
+ <xref linkend="libpq-PQrequestCancelStart"/>
+ must be called while no other thread is using the original PGconn. Then
+ the returned <structname>PGconn</structname>
+ can be used at a later point in any thread to send a cancel request.
+ A cancel request can be sent using the returned PGconn in two ways,
+ non-blocking using <xref linkend="libpq-PQconnectPoll"/>
+ or blocking using <xref linkend="libpq-PQconnectComplete"/>.
+ </para>
+
+ <para>
+ In addition to all the statuses that a regular
+ <structname>PGconn</structname>
+ can have, the returned connection can have two additional statuses:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQconnectPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQrequestCancelStart"/>. No connection to the
+ server has been initiated yet at this point. To initiate the cancel
+ request, use <xref linkend="libpq-PQconnectPoll"/>
+ for non-blocking behaviour or <xref linkend="libpq-PQconnectComplete"/>
+ for blocking behaviour.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-connection-cancel-finished">
+ <term><symbol>CONNECTION_CANCEL_FINISHED</symbol></term>
+ <listitem>
+ <para>
+ Cancel request was successfully sent. The cancellation connection
+ cannot be used for anything else anymore, so it should be freed using
+ <xref linkend="libpq-PQfinish"/>. Alternatively, the cancellation
+ connection can be reset using
+ <xref linkend="libpq-PQresetStart"/>; that way it can be reused to
+ cancel a future query on the same connection.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ Since this object represents a connection meant only for cancellations,
+ it can only be used with a limited subset of the functions that can be used
+ for a regular <structname>PGconn</structname> object. The functions that
+ this object can be passed to are
+ <xref linkend="libpq-PQstatus"/>,
+ <xref linkend="libpq-PQerrorMessage"/>,
+ <xref linkend="libpq-PQconnectComplete"/>,
+ <xref linkend="libpq-PQconnectPoll"/>,
+ <xref linkend="libpq-PQsocket"/>,
+ <xref linkend="libpq-PQresetStart"/>, and
+ <xref linkend="libpq-PQfinish"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5665,7 +5813,9 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of
+ <xref linkend="libpq-PQrequestCancel"/>
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
@@ -5679,15 +5829,6 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
recommended size is 256 bytes).
</para>
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
<para>
<xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
handler, if the <parameter>errbuf</parameter> is a local variable in the
@@ -5696,33 +5837,24 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
also be invoked from a thread that is separate from the one
manipulating the <structname>PGconn</structname> object.
</para>
- </listitem>
- </varlistentry>
- </variablelist>
-
- <variablelist>
- <varlistentry id="libpq-PQrequestCancel">
- <term><function>PQrequestCancel</function><indexterm><primary>PQrequestCancel</primary></indexterm></term>
-
- <listitem>
- <para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
-<synopsis>
-int PQrequestCancel(PGconn *conn);
-</synopsis>
- </para>
<para>
- Requests that the server abandon processing of the current
- command. It operates directly on the
- <structname>PGconn</structname> object, and in case of failure stores the
- error message in the <structname>PGconn</structname> object (whence it can
- be retrieved by <xref linkend="libpq-PQerrorMessage"/>). Although
- the functionality is the same, this approach is not safe within
- multiple-thread programs or signal handlers, since it is possible
- that overwriting the <structname>PGconn</structname>'s error message will
- mess up the operation currently in progress on the connection.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function, a
+ connection is made to the same host and port as the original. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. This means the connection
+ is never encrypted using TLS or GSS.
</para>
</listitem>
</varlistentry>
@@ -8856,10 +8988,10 @@ int PQisthreadsafe();
</para>
<para>
- The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
+ The functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQrequestCancelStart"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index f2e583f9fa..b9f0c0558c 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -158,19 +158,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQrequestCancel(conn);
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f7609d0c64 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,5 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQrequestCancelStart 187
+PQconnectComplete 188
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 6e936bbff3..7390fbec7c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -605,8 +606,17 @@ pqDropServerData(PGconn *conn)
if (conn->write_err_msg)
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQresetStart invocations. Otherwise they don't know the secret token of
+ * the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +747,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQrequestCancelStart
+ *
+ * Asynchronously send a cancel request for the query running on the given
+ * connection. This requires polling the returned PGconn to actually
+ * complete the cancellation of the request.
+ */
+PGconn *
+PQrequestCancelStart(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGconn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+ cancelConn->try_next_host = false;
+ cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -914,6 +986,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2082,10 +2194,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host,
+ * which is determined in PQrequestCancelStart. So leave these settings
+ * alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2134,6 +2253,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2274,13 +2402,15 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
case CONNECTION_OK:
return PGRES_POLLING_OK;
+ case CONNECTION_CANCEL_FINISHED:
+ return PGRES_POLLING_OK;
/* These are reading states */
case CONNECTION_AWAITING_RESPONSE:
@@ -2292,6 +2422,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_CANCEL_FINISHED;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2301,6 +2459,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2325,6 +2484,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2344,6 +2511,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2529,19 +2705,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQrequestCancelStart.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2557,7 +2741,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2567,12 +2751,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
appendPQExpBuffer(&conn->errorMessage,
@@ -2595,7 +2785,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2624,7 +2814,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2718,8 +2908,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2758,6 +2949,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2966,6 +3167,25 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4194,6 +4414,11 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4311,6 +4536,12 @@ PQresetStart(PGconn *conn)
{
closePGconn(conn);
+ if (conn->cancelRequest)
+ {
+ conn->status = CONNECTION_STARTING;
+ return 1;
+ }
+
return connectDBStart(conn);
}
@@ -4663,6 +4894,22 @@ cancel_errReturn:
return false;
}
+/*
+ * PQconnectComplete: takes a non-blocking cancel connection and completes it
+ * in a blocking manner.
+ *
+ * Returns 1 if able to connect successfully and 0 if not.
+ *
+ * This can be useful if you only care about the thread safety of
+ * PQrequestCancelStart and not about its non-blocking functionality.
+ */
+int
+PQconnectComplete(PGconn *cancelConn)
+{
+ connectDBComplete(cancelConn);
+ return cancelConn->status != CONNECTION_BAD;
+}
+
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4679,45 +4926,31 @@ cancel_errReturn:
int
PQrequestCancel(PGconn *conn)
{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
+ PGconn *cancelConn = NULL;
- if (conn->sock == PGINVALID_SOCKET)
+ cancelConn = PQrequestCancelStart(conn);
+ if (!cancelConn)
{
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
+ appendPQExpBufferStr(&conn->errorMessage, libpq_gettext("out of memory\n"));
return false;
}
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
+ if (cancelConn->status == CONNECTION_BAD)
{
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- if (!r)
+ if (!PQconnectComplete(cancelConn))
{
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
+ appendPQExpBufferStr(&conn->errorMessage, PQerrorMessage(cancelConn));
+ freePGconn(cancelConn);
+ return false;
}
- return r;
+ freePGconn(cancelConn);
+ return true;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index d76bb3957a..a944cb2c12 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -558,8 +558,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -642,7 +645,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -737,7 +740,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -755,13 +758,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 8117cbd40f..c553a74898 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -255,7 +255,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index a1dc7b796d..9771805dd3 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -201,6 +201,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means that there
+ * was a clean connection closure; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7986445f1a..24695a6026 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,12 +59,15 @@ typedef enum
{
CONNECTION_OK,
CONNECTION_BAD,
+ CONNECTION_CANCEL_FINISHED,
/* Non-blocking mode only below here */
/*
* The existence of these should never be relied upon - they should only
* be used for user feedback or similar purposes.
*/
+ CONNECTION_STARTING, /* Waiting for connection attempt to be
+ * started. */
CONNECTION_STARTED, /* Waiting for connection to be made. */
CONNECTION_MADE, /* Connection OK; waiting to send. */
CONNECTION_AWAITING_RESPONSE, /* Waiting for a response from the
@@ -282,6 +285,7 @@ extern PGconn *PQconnectStart(const char *conninfo);
extern PGconn *PQconnectStartParams(const char *const *keywords,
const char *const *values, int expand_dbname);
extern PostgresPollingStatusType PQconnectPoll(PGconn *conn);
+extern int PQconnectComplete(PGconn *conn);
/* Synchronous (blocking) */
extern PGconn *PQconnectdb(const char *conninfo);
@@ -330,9 +334,12 @@ extern void PQfreeCancel(PGcancel *cancel);
/* issue a cancel request */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* more secure version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
+/* non-blocking and thread-safe version of PQrequestCancel */
+extern PGconn *PQrequestCancelStart(PGconn *conn);
+
/* Accessor functions for PGconn objects */
extern char *PQdb(const PGconn *conn);
extern char *PQuser(const PGconn *conn);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3db6a17db4..b9ce1d58c1 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 12179f2514..b073235197 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -948,26 +948,18 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
-
- if (cancel != NULL)
+ if (PQrequestCancel(conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQerrorMessage(conn));
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 0ff563f59a..52503907bd 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,275 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ if (PQsendQuery(conn, "SELECT pg_sleep(30)") != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep(30)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGconn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQrequestCancelStart and then polling with PQcancelConnectPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQconnectPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQresetStart works on the cancel connection and it can be reused
+ * after
+ */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQresetPoll(cancelConn);
+ int sock = PQsocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQerrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQstatus(cancelConn) != CONNECTION_CANCEL_FINISHED)
+ pg_fatal("unexpected cancel connection status: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQfinish(cancelConn);
+
+ /* test PQconnectComplete */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQrequestCancelStart(conn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /* test PQconnectComplete with reset connection */
+ if (!PQresetStart(cancelConn))
+ {
+ pg_fatal("cancel connection reset failed: %s", PQerrorMessage(cancelConn));
+ }
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQstatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQerrorMessage(cancelConn));
+ if (!PQconnectComplete(cancelConn))
+ pg_fatal("failed to send cancel: %s", PQerrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQfinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1545,6 +1814,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1642,7 +1912,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On 10/5/22 06:23, Jelte Fennema wrote:
In my first version of this patch, this is exactly what I did. But then
I got this feedback from Jacob, so I changed it to reusing PGconn: [snip]
I changed it back to use PGcancelConn as per your suggestion and I
agree that the API got better because of it.
Sorry for the whiplash!
Is the latest attachment the correct version? I don't see any difference
between the latest 0001 and the previous version's 0002 -- it has no
references to PG_TEST_TIMEOUT_DEFAULT, PGcancelConn, etc.
Thanks,
--Jacob
Ugh, it indeed seems like I somehow messed up sending the new patch.
Here's the correct one.
Attachments:
0001-Add-non-blocking-version-of-PQcancel.patch
From d8d581a0033e0365faf96a39e8ce75a6ec9ebf7d Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
contrib/dblink/dblink.c | 22 +-
contrib/postgres_fdw/connection.c | 93 ++++-
.../postgres_fdw/expected/postgres_fdw.out | 15 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 +
doc/src/sgml/libpq.sgml | 279 +++++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 375 ++++++++++++++++--
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
src/test/isolation/isolationtester.c | 29 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++++-
15 files changed, 1050 insertions(+), 109 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 9eef417c47..2a55c6759a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1378,22 +1378,24 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQcancelSend(conn);
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = "OK";
+ }
+ PQcancelFinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 939d114f02..9622441da7 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1264,35 +1264,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQcancelFinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQcancelFinish(cancel_conn);
+ return false;
}
+ PQcancelFinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index cc9e39c4a5..113f3204cc 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e48ccd286b..bf977442d6 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -713,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3c9bd3d673..90db021c1d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5627,13 +5627,220 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandons processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only matter for authentication or after authentication
+ are ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. By using
+ <xref linkend="libpq-PQcancelStatus"/>
+ it can be checked if there was any error when sending the cancellation
+ request. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol>, the request was
+ successfully sent, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. In that case the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but it won't instantly start sending a cancel request over this
+ connection like <xref linkend="libpq-PQcancelSend"/>.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully.
+ The <structname>PGcancelConn</structname> object is an opaque structure
+ that is not meant to be accessed directly by the application.
+ This <structname>PGcancelConn</structname> object can be used to cancel
+ the query that's running on the original connection in a thread-safe and
+ non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+      returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>,
+      it means that the dispatch of the cancel request has completed (although
+      this is no promise that the query was actually cancelled). A
+      <symbol>CONNECTION_OK</symbol> result for a
+      <structname>PGconn</structname>, on the other hand, means that queries
+      can be sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+     Closes the cancel connection (if it has not yet finished sending the
+     cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+      If the <symbol>PGcancelConn</symbol> is currently being used to send a
+      cancel request, that connection is closed first. The
+      <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+      used to send a new cancel request. This makes it possible to create one
+      <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+      multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5675,14 +5882,30 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of
+ <xref linkend="libpq-PQcancelSend"/>, but one that can be used safely
+ from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+      <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+      to cancel a query from a signal handler. If signal safety is not needed,
+      <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+      instead.
+ <xref linkend="libpq-PQcancel"/> can be safely invoked from a signal
+ handler, if the <parameter>errbuf</parameter> is a local variable in the
+ signal handler. The <structname>PGcancel</structname> object is read-only
+ as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
+ also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5690,21 +5913,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+      To achieve signal safety, some concessions had to be made in the
+      implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+      options of the original connection are used when establishing the
+      connection for the cancellation request. This function connects to the
+      same host as the original connection, using the same port. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5716,13 +5940,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+      <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+      compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8871,7 +9104,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
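For reviewers, here is a sketch of how the new API is meant to be driven from a caller's own wait loop, in the style of the existing PQconnectPoll examples. This is illustrative code written against the patched API, not part of the patch itself, and it needs a live connection with a query in flight, so it is not runnable standalone:

```c
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Cancel the query running on conn without blocking indefinitely on libpq.
 * Returns 1 if the cancel request was dispatched, 0 otherwise. */
static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	int			success;

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return 0;
	}

	/* Drive the cancel connection to completion, as with PQconnectPoll. */
	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;
		int			sock;

		if (pollres == PGRES_POLLING_OK || pollres == PGRES_POLLING_FAILED)
			break;

		sock = PQcancelSocket(cancelConn);
		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* In a real event loop this would be the loop's own wait primitive. */
		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
			break;
	}

	success = (PQcancelStatus(cancelConn) == CONNECTION_OK);
	if (!success)
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
	return success;
}
```

In an event-loop codebase the blocking select() above would of course be replaced by registering PQcancelSocket() with the loop and resuming PQcancelPoll() when the socket becomes readable or writable.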
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 1cc97b72f7..0f5e84ad71 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f56e8c185c 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 746e9b4f1e..7b59697e64 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,6 +376,7 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -599,8 +600,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+	 * Cancel connections must retain their be_pid and be_key across
+	 * PQresetStart invocations. Otherwise they would no longer know the
+	 * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -731,6 +741,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGconn to actually complete the cancellation of the
+ * request.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection was NULL\n"));
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ appendPQExpBufferStr(&cancelConn->errorMessage, libpq_gettext("passed connection is not open\n"));
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+	 * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+	cancelConn->try_next_host = false;
+	cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +979,46 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ appendPQExpBufferStr(&dstConn->errorMessage,
+ libpq_gettext("out of memory\n"));
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2055,10 +2167,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host,
+ * which is determined in PQcancelConn. So leave these settings
+ * alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2107,6 +2226,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2247,8 +2375,8 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
@@ -2265,6 +2393,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2274,6 +2430,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2298,6 +2455,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2317,6 +2482,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2498,19 +2672,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQcancelConn.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2526,7 +2708,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2536,12 +2718,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
appendPQExpBuffer(&conn->errorMessage,
@@ -2564,7 +2752,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2593,7 +2781,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2687,8 +2875,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2727,6 +2916,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2935,6 +3134,30 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+			 * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ appendPQExpBuffer(&conn->errorMessage,
+ libpq_gettext("could not send cancel packet: %s\n"),
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4114,6 +4337,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a Terminate
+ * message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4582,6 +4814,96 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancel request in a blocking fashion.
+ *
+ * Returns a PGcancelConn object. Use PQcancelStatus and
+ * PQcancelErrorMessage to check whether dispatching the cancel request
+ * succeeded. The caller is responsible for disposing of the returned
+ * object with PQcancelFinish, whether the request succeeded or not.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (cancelConn && cancelConn->conn.status != CONNECTION_BAD)
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ return PQconnectPoll((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4640,6 +4962,7 @@ PQrequestCancel(PGconn *conn)
}
+
/*
* pqPacketSend() -- convenience routine to send a message to server.
*
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 795500c593..b5b10ec2ba 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -640,7 +643,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -735,7 +738,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -753,13 +756,17 @@ definitelyEOF:
libpq_gettext("server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index b42a908733..ca378d2ad5 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -253,7 +253,7 @@ rloop:
appendPQExpBufferStr(&conn->errorMessage,
libpq_gettext("SSL connection has been closed unexpectedly\n"));
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
appendPQExpBuffer(&conn->errorMessage,
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 3df4a97f2e..18dff253c4 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure means that the connection
+ * was closed cleanly; in those cases -2 is returned. Currently only the TLS
+ * implementation of pqsecure_read ever returns -2. For the other
+ * implementations a clean connection closure is detected in pqReadData
+ * instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index b7df3224c0..de2e32ca63 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c75ed63a2c..84027bc4ab 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -592,6 +596,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153..3781f7982b 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index c609f42258..2674abb539 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1638,6 +1896,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1739,7 +1998,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On 15 Nov 2022, at 12:38, Jelte Fennema <Jelte.Fennema@microsoft.com> wrote:
Here's the correct one. <0001-Add-non-blocking-version-of-PQcancel.patch>
This version of the patch no longer applies, a rebased version is needed.
--
Daniel Gustafsson https://vmware.com/
> This version of the patch no longer applies, a rebased version is needed.
Attached is a patch that applies cleanly again and is also changed
to use the recently introduced libpq_append_conn_error.
I also attached a patch that runs pgindent after the introduction of
libpq_append_conn_error. I noticed that this hadn't happened when
trying to run pgindent on my own changes.
Attachments:
v9-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patchapplication/octet-stream; name=v9-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patchDownload
From 9bf997762786f9d276f211c588918a1cdca598d5 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v9 1/2] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 124 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 +--
src/interfaces/libpq/fe-lobj.c | 42 ++++----
src/interfaces/libpq/fe-misc.c | 10 +-
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 149 insertions(+), 149 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index e71626580a..b8e961795d 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -710,7 +710,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 4a6c358bb6..3caca455aa 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index f88d672c6c..d0c3b21fb9 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3216,17 +3216,17 @@ keep_going: /* We will come back to here until there is
* Try to validate message length before using it.
* Authentication requests can't be very large, although GSS
* auth requests may not be that small. Same for
- * NegotiateProtocolVersion. Errors can be a
- * little larger, but not huge. If we see a large apparent
- * length in an error, it means we're really talking to a
- * pre-3.0-protocol server; cope. (Before version 14, the
- * server also used the old protocol for errors that happened
- * before processing the startup packet.)
+ * NegotiateProtocolVersion. Errors can be a little larger,
+ * but not huge. If we see a large apparent length in an
+ * error, it means we're really talking to a pre-3.0-protocol
+ * server; cope. (Before version 14, the server also used the
+ * old protocol for errors that happened before processing the
+ * startup packet.)
*/
if ((beresp == 'R' || beresp == 'v') && (msgLength < 8 || msgLength > 2000))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3705,7 +3705,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3755,7 +3755,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3768,8 +3768,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7148,7 +7148,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index da229d632a..88600ce883 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index bcd228cef1..50282ff423 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 4159610f6c..fadc46817b 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 364bad2b88..7a8e9c9962 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index 7e4246c51f..6377e800dd 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 0ce92dbf43..af95dfa322 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -588,8 +588,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index bad85359b6..4eb212d15c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -410,7 +410,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -962,7 +962,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -988,7 +988,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1032,7 +1032,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1084,10 +1084,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1117,7 +1117,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1135,7 +1135,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1234,7 +1234,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1245,7 +1245,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1260,7 +1260,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1273,7 +1273,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1310,10 +1310,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1321,7 +1321,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1378,7 +1378,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1394,7 +1394,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1447,7 +1447,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1489,12 +1489,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index e74d3ccf69..215c9a74ed 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 512762f999..b4bb6db5a7 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
Attachment: v9-0002-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 4103ac630049bd0c9ea25105c2220273901077d7 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v9 2/2] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking I/O. This makes PQcancel
impossible to use in an event-loop based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
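For reviewers, the intended non-blocking usage pattern looks roughly like the
sketch below. It is written against the API proposed in this patch
(PQcancelConn, PQcancelPoll, PQcancelSocket, PQcancelStatus,
PQcancelErrorMessage, PQcancelFinish), so it will not compile against a
released libpq; it mirrors the loop used in the postgres_fdw changes, with
poll(2) standing in for a real event loop that would instead register the
socket with its own readiness mechanism.

```
/* Sketch: drive a cancel request without blocking indefinitely in libpq.
 * Assumes the API added by this patch; not compilable against stock libpq.
 */
#include <poll.h>
#include <stdio.h>
#include "libpq-fe.h"

static int
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return -1;
	}

	for (;;)
	{
		PostgresPollingStatusType st = PQcancelPoll(cancelConn);
		struct pollfd pfd;

		if (st == PGRES_POLLING_OK)
			break;				/* cancel request fully dispatched */
		if (st == PGRES_POLLING_FAILED)
		{
			fprintf(stderr, "cancel failed: %s",
					PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return -1;
		}

		/* Wait for the socket to become ready in the requested direction */
		pfd.fd = PQcancelSocket(cancelConn);
		pfd.events = (st == PGRES_POLLING_READING) ? POLLIN : POLLOUT;
		(void) poll(&pfd, 1, -1);
	}

	PQcancelFinish(cancelConn);
	return 0;
}
```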
---
contrib/dblink/dblink.c | 22 +-
contrib/postgres_fdw/connection.c | 93 ++++-
.../postgres_fdw/expected/postgres_fdw.out | 15 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 +
doc/src/sgml/libpq.sgml | 279 +++++++++++--
src/fe_utils/connect_utils.c | 10 +-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 373 ++++++++++++++++--
src/interfaces/libpq/fe-misc.c | 15 +-
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 +
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
src/test/isolation/isolationtester.c | 29 +-
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++++-
15 files changed, 1048 insertions(+), 109 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 04095a8f0e..05058dc66b 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1378,22 +1378,24 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQcancelSend(conn);
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = "OK";
+ }
+ PQcancelFinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index f0c45b00db..ce5f908bb1 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1264,35 +1264,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQcancelFinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQcancelFinish(cancel_conn);
+ return failed;
}
+ PQcancelFinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 2ab3f1efaa..1036ebc336 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 51560429e0..0923f93803 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -713,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index f9558dec3b..2e21ea6ee7 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5627,13 +5627,220 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. By using
+ <xref linkend="libpq-PQcancelStatus"/>
+ it can be checked whether there was any error when sending the cancellation
+ request. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol>, the request was
+ successfully sent, but if it returns <symbol>CONNECTION_BAD</symbol>,
+ an error occurred. In that case the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but it won't instantly start sending a cancel request over this
+ connection like <xref linkend="libpq-PQcancelSend"/>.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully.
+ The <structname>PGcancelConn</structname> object is an opaque structure
+ that is not meant to be accessed directly by the application.
+ This <structname>PGcancelConn</structname> object can be used to cancel
+ the query that's running on the original connection in a thread-safe and
+ non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled),
+ while a <symbol>CONNECTION_OK</symbol> result for a
+ <structname>PGconn</structname> means that queries can be sent over the
+ connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <structname>PGcancelConn</structname> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <structname>PGcancelConn</structname> is currently being used to send a
+ cancel request, then this connection is closed. The
+ <structname>PGcancelConn</structname> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <structname>PGcancelConn</structname> for a <structname>PGconn</structname>
+ and reuse it multiple times throughout
+ the lifetime of the original <structname>PGconn</structname>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5675,14 +5882,30 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of
+ <xref linkend="libpq-PQcancelSend"/>, but one that can be used safely
+ from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead.
+ <xref linkend="libpq-PQcancel"/> can be safely invoked from a signal
+ handler, if the <parameter>errbuf</parameter> is a local variable in the
+ signal handler. The <structname>PGcancel</structname> object is read-only
+ as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
+ also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5690,21 +5913,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function a
+ connection is made to the same host, using the same port. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5716,13 +5940,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8871,7 +9104,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 1cc97b72f7..0f5e84ad71 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f56e8c185c 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d0c3b21fb9..364fd4f032 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,6 +376,7 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
@@ -599,8 +600,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQresetStart invocations. Otherwise they don't know the secret token of
+ * the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -731,6 +741,68 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGconn to actually complete the cancellation of the
+ * request.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used.
+ */
+ memcpy(&cancelConn->raddr, &conn->raddr, sizeof(SockAddr));
+ cancelConn->whichhost = conn->whichhost;
+ cancelConn->try_next_host = false;
+ cancelConn->try_next_addr = false;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -906,6 +978,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets the error
+ * message of dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2030,10 +2141,17 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host,
+ * which is determined in PQcancelConn. So leave these settings alone for
+ * cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2082,6 +2200,15 @@ connectDBComplete(PGconn *conn)
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(conn))
+ {
+ conn->status = CONNECTION_BAD;
+ return 0;
+ }
+ }
+
/*
* Set up a time limit, if connect_timeout isn't zero.
*/
@@ -2222,8 +2349,8 @@ PQconnectPoll(PGconn *conn)
switch (conn->status)
{
/*
- * We really shouldn't have been polled in these two cases, but we
- * can handle it.
+ * We really shouldn't have been polled in these three cases, but
+ * we can handle it.
*/
case CONNECTION_BAD:
return PGRES_POLLING_FAILED;
@@ -2240,6 +2367,34 @@ PQconnectPoll(PGconn *conn)
/* Load waiting data */
int n = pqReadData(conn);
+#ifndef WIN32
+ if (n == -2 && conn->cancelRequest)
+#else
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP.
+ * Sometimes it will error with an ECONNRESET when there is a
+ * clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the
+ * cancellation anyway, so even if this is not always correct
+ * we do the same here.
+ */
+ if (n < 0 && conn->cancelRequest)
+#endif
+ {
+ /*
+ * This is the expected end state for cancel connections.
+ * They are closed once the cancel is processed by the
+ * server.
+ */
+ conn->status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+ }
if (n < 0)
goto error_return;
if (n == 0)
@@ -2249,6 +2404,7 @@ PQconnectPoll(PGconn *conn)
}
/* These are writing states, so we just proceed. */
+ case CONNECTION_STARTING:
case CONNECTION_STARTED:
case CONNECTION_MADE:
break;
@@ -2272,6 +2428,14 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
+ /*
+ * Cancel requests never have more addresses to try. They should only
+ * try a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
if (conn->addr_cur && conn->addr_cur->ai_next)
{
conn->addr_cur = conn->addr_cur->ai_next;
@@ -2291,6 +2455,15 @@ keep_going: /* We will come back to here until there is
int ret;
char portstr[MAXPGPATH];
+ /*
+ * Cancel requests never have more hosts to try. They should only try
+ * a single one.
+ */
+ if (conn->cancelRequest)
+ {
+ goto error_return;
+ }
+
if (conn->whichhost + 1 < conn->nconnhost)
conn->whichhost++;
else
@@ -2466,19 +2639,27 @@ keep_going: /* We will come back to here until there is
char host_addr[NI_MAXHOST];
/*
- * Advance to next possible host, if we've tried all of
- * the addresses for the current host.
+ * Cancel requests don't use addr_cur at all. They have
+ * their raddr field already filled in during
+ * initialization in PQcancelConn.
*/
- if (addr_cur == NULL)
+ if (!conn->cancelRequest)
{
- conn->try_next_host = true;
- goto keep_going;
- }
+ /*
+ * Advance to next possible host, if we've tried all
+ * of the addresses for the current host.
+ */
+ if (addr_cur == NULL)
+ {
+ conn->try_next_host = true;
+ goto keep_going;
+ }
- /* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ /* Remember current address for possible use later */
+ memcpy(&conn->raddr.addr, addr_cur->ai_addr,
+ addr_cur->ai_addrlen);
+ conn->raddr.salen = addr_cur->ai_addrlen;
+ }
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2675,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(conn->raddr.addr.ss_family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2504,12 +2685,18 @@ keep_going: /* We will come back to here until there is
* addresses to try; this reduces useless chatter in
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
+ *
+ * Cancel requests never have more addresses to try.
+ * They should only try a single one.
*/
- if (addr_cur->ai_next != NULL ||
- conn->whichhost + 1 < conn->nconnhost)
+ if (!conn->cancelRequest)
{
- conn->try_next_addr = true;
- goto keep_going;
+ if (addr_cur->ai_next != NULL ||
+ conn->whichhost + 1 < conn->nconnhost)
+ {
+ conn->try_next_addr = true;
+ goto keep_going;
+ }
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
@@ -2531,7 +2718,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2745,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2837,9 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock,
+ (struct sockaddr *) &conn->raddr.addr,
+ conn->raddr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -2690,6 +2878,16 @@ keep_going: /* We will come back to here until there is
}
}
+ case CONNECTION_STARTING:
+ {
+ if (!connectDBStart(conn))
+ {
+ goto error_return;
+ }
+ conn->status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+
case CONNECTION_STARTED:
{
socklen_t optlen = sizeof(optval);
@@ -2891,6 +3089,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4063,6 +4284,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4531,6 +4761,96 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancel request on the given connection in a blocking fashion.
+ *
+ * Returns a PGcancelConn object. Use PQcancelStatus to check whether
+ * sending the cancel request succeeded, PQcancelErrorMessage to get the
+ * error message on failure, and PQcancelFinish to free the object.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (cancelConn && cancelConn->conn.status != CONNECTION_BAD)
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ return PQconnectPoll((PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
@@ -4589,6 +4909,7 @@ PQrequestCancel(PGconn *conn)
}
+
/*
* pqPacketSend() -- convenience routine to send a message to server.
*
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index fadc46817b..40a7b9ab71 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 4eb212d15c..1b83946aee 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 215c9a74ed..2cbfa501e2 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except when the failure means that there
+ * was a clean connection closure; in those cases -2 is returned. Currently
+ * only the TLS implementation of pqsecure_read ever returns -2. For the
+ * other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index b7df3224c0..697e0ed85d 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQrequestSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index b4bb6db5a7..6e117f8b86 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -592,6 +596,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153..3781f7982b 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index a37e4e2500..4018c61a3b 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Is there anything that is currently blocking this patch? I'd quite
like it to get into PG16.
Especially since I recently ran into another use case that I would want
to use this patch for: adding an async cancel function to Python's
psycopg3 library. This library exposes both a Connection class and an
AsyncConnection class (using Python's asyncio feature). But one downside
of the AsyncConnection type is that it doesn't have a cancel method.
I ran into this while changing the PgBouncer tests to use python. And
the cancellation tests were the only tests that required me to use a
ThreadPoolExecutor instead of simply being able to use async-await
style programming:
https://github.com/pgbouncer/pgbouncer/blob/master/test/test_cancel.py#LL9C17-L9C17
After discussing this patch privately with Andres I created a new
version of this patch.
The main changes are:
1. Build on top of a refactor to addrinfo handling I had done for
another patch of mine (libpq load balancing). This allows creation of
a fake addrinfo list, which made it possible to remove lots of special
cases for cancel requests from PQconnectPoll
2. Move -2 return value of pqReadData to a separate commit.
3. Move usage of new cancel APIs to a separate commit.
4. Move most of the logic that's specific to cancel requests to cancel
related functions, e.g. PQcancelPoll does more than simply forwarding
to PQconnectPoll now.
5. Copy over the connhost data from the original connection, instead
of assuming that it will be rebuilt identically in the cancel
connection. The main reason for this is that if the load balancing
patch gets merged, it won't necessarily be rebuilt identically
anymore.
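For reference, this is roughly how I'd expect the new API to be driven from an event-loop style codebase. It is only a sketch based on the function signatures in the attached patch (PQcancelConn, PQcancelPoll, PQcancelSocket, PQcancelStatus, PQcancelErrorMessage, PQcancelFinish), so it won't compile against an unpatched libpq, and the blocking select() here just stands in for whatever the real event loop would do:

```c
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Returns 1 if the cancel request was dispatched successfully, 0 otherwise. */
static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres;
	int			ok;

	/* The first call to PQcancelPoll opens the socket and starts the request */
	pollres = PQcancelPoll(cancelConn);
	while (pollres == PGRES_POLLING_READING || pollres == PGRES_POLLING_WRITING)
	{
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (sock < 0)
			break;

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* a real event loop would register the socket instead of blocking */
		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
			break;

		pollres = PQcancelPoll(cancelConn);
	}

	ok = (PQcancelStatus(cancelConn) == CONNECTION_OK);
	if (!ok)
		fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));

	PQcancelFinish(cancelConn);
	return ok;
}
```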
Attachments:
v10-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patchapplication/octet-stream; name=v10-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patchDownload
From 8886ac4485a039ee28b966a0ad53175b2a10f290 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v10 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactor, the mechanism for
iteration over addresses in PQconnectPoll is now identical to its
iteration over hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 66ba359390f..ee28e223bd7 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 773e9e1f3a2..46afe127f15 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2077,7 +2078,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2121,11 +2122,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2272,9 +2273,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2287,6 +2288,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2327,7 +2329,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2350,8 +2352,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2362,8 +2364,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2372,7 +2374,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2387,8 +2389,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2397,8 +2399,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2455,30 +2463,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2501,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2505,7 +2512,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2531,7 +2538,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2565,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2657,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4041,6 +4048,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do this
+ * is so that we can edit the resulting list as we please, because the memory
+ * is then owned by us. Changing the original addrinfo directly is risky, since
+ * we don't control how its memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4048,11 +4094,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 712d572373c..940db7ecc8c 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -461,8 +461,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 51484ca7e2f..6762f4dc70f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
v10-0004-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=v10-0004-Add-non-blocking-version-of-PQcancel.patchDownload
From 0b3cec92870adab99452a074f0cf379fae798046 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v10 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 279 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 461 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 993 insertions(+), 54 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 0e7ae70c706..b342ff407b9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5628,13 +5628,220 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ whether sending the cancellation
+ request caused any error. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> the request was
+ successfully sent, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. In that case the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but it won't instantly start sending a cancel request over this
+ connection like <xref linkend="libpq-PQcancelSend"/>.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully.
+ The <structname>PGcancelConn</structname> object is an opaque structure
+ that is not meant to be accessed directly by the application.
+ This <structname>PGcancelConn</structname> object can be used to cancel
+ the query that's running on the original connection in a thread-safe and
+ non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled).
+ A <symbol>CONNECTION_OK</symbol> result for a
+ <structname>PGconn</structname>, on the other hand, means that queries
+ can be sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then this connection is closed. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This can be used to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5676,14 +5883,30 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of
+ <xref linkend="libpq-PQcancelSend"/>, but one that can be used safely
+ from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead.
+ <xref linkend="libpq-PQcancel"/> can be safely invoked from a signal
+ handler, if the <parameter>errbuf</parameter> is a local variable in the
+ signal handler. The <structname>PGcancel</structname> object is read-only
+ as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
+ also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5691,21 +5914,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. When calling this function a
+ connection is made to the same host and port as the original. The only
+ connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5717,13 +5941,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. It has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8872,7 +9105,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 46afe127f15..94d68f8d5c4 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,8 +376,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -600,8 +602,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -732,6 +743,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGconn to actually complete the cancellation of the
+ * request.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +1025,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets the error
+ * message of dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2031,10 +2188,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so leave
+ * these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2176,7 +2341,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2301,13 +2469,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -2538,7 +2710,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->family != AF_UNIX)
+ if (conn->raddr.addr.ss_family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2657,7 +2829,7 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr,
addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
@@ -2898,6 +3070,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3610,8 +3805,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -3978,19 +4179,13 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ /*
+ * For cancel requests we don't free the addrinfo in closePGconn (see
+ * comment there for reasoning). So we still have to free it here.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4101,6 +4296,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4108,6 +4328,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4161,7 +4390,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4576,6 +4811,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. For
+ * all other OSes we consider any error other than EOF a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we receive
+ * some we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 940db7ecc8c..cd1857ea493 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -594,6 +598,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 6111bf9b672..6d88d419a6c 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
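For reviewers who want a condensed view of the intended call sequence, here is a rough sketch (error handling trimmed, names exactly as introduced by this patch, compressed from the test in libpq_pipeline.c) of how an event-loop application would drive the non-blocking API:

```c
/* Sketch only: not a complete program */
PGcancelConn *cancelConn = PQcancelConn(conn);

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
    /* report PQcancelErrorMessage(cancelConn) */ ;

for (;;)
{
    PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);

    if (pollres == PGRES_POLLING_OK)
        break;                  /* cancel request completed */
    if (pollres == PGRES_POLLING_FAILED)
        break;                  /* report PQcancelErrorMessage(cancelConn) */

    /*
     * Register PQcancelSocket(cancelConn) with the event loop and wait
     * for it to become readable or writable, as requested by pollres.
     */
}
PQcancelFinish(cancelConn);
```

The key property is that every call above returns without blocking, so the cancel request can be interleaved with other work on the same event loop.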
v10-0003-Return-2-from-pqReadData-on-EOF.patch
From b9fdf1a109635cbb54efc0916de2e67894ad39b1 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v10 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because it is the client that is expected to close
the connection, not the server. But for the Postgres cancellation
protocol the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of doing a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index ab2cbf045b8..4e29a9f5c90 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure is a clean
+ * connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
v10-0005-Start-using-new-libpq-cancel-APIs.patch
From 2ba7bcc5c4dd0b55487178612744169a82f2fac7 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v10 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
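The shape of the conversion is the same in each caller; roughly (sketch only, error reporting elided):

```c
/* Old API: blocking, fixed-size error buffer */
PGcancel   *cancel = PQgetCancel(conn);
char        errbuf[256];

if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
    /* report errbuf */ ;
PQfreeCancel(cancel);

/* New API: connection-style object with a proper error message */
PGcancelConn *cancelConn = PQcancelSend(conn);

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
    /* report PQcancelErrorMessage(cancelConn) */ ;
PQcancelFinish(cancelConn);
```

Callers that need a timeout (such as postgres_fdw) use PQcancelConn plus a PQcancelPoll loop instead of the blocking PQcancelSend.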
---
contrib/dblink/dblink.c | 22 +++--
contrib/postgres_fdw/connection.c | 93 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 126 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 8982d623d3b..be09d4fe926 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,24 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
-
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ cancelConn = PQcancelSend(conn);
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ {
+ msg = "OK";
+ }
+ PQcancelFinish(cancelConn);
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 7760380f00d..d8554ae775c 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,98 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_CATCH();
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PQcancelFinish(cancel_conn);
+ PG_RE_THROW();
+ }
+ PG_END_TRY();
+
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PQcancelFinish(cancel_conn);
+ return false;
}
+ PQcancelFinish(cancel_conn);
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 2350cfe1487..0e365415d45 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index c37aa803836..89dfe959243 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -326,6 +326,7 @@ DELETE FROM loct_empty;
ANALYZE ft_empty;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft_empty ORDER BY c1;
+
-- ===================================================================
-- WHERE with remotely-executable conditions
-- ===================================================================
@@ -713,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
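For reviewers who want to see the proposed API in isolation: the postgres_fdw hunk above drives the cancel state machine with the backend's WaitLatchOrSocket, but the same loop can be written in a plain libpq client using any poller. The sketch below uses select(2) as a stand-in for an event loop. Note that PQcancelConnectionStart, PQcancelPoll, PQcancelSocket, PQcancelStatus, PQcancelErrorMessage and PQcancelFinish are the names proposed by this patch set; they are not part of released libpq, so this is an illustration of intended usage, not tested code.

```c
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Returns 1 if the cancel request was delivered, 0 otherwise. */
static int
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancel_conn = PQcancelConnectionStart(conn);
	int			ok = 0;

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
		int			sock = PQcancelSocket(cancel_conn);
		fd_set		rfds,
					wfds;

		if (pollres == PGRES_POLLING_OK)
		{
			ok = 1;
			break;
		}
		if (pollres != PGRES_POLLING_READING &&
			pollres != PGRES_POLLING_WRITING)
			break;				/* PGRES_POLLING_FAILED */

		/*
		 * In a real event loop this socket would be registered with the
		 * loop's poller; select() here just stands in for that wait.
		 */
		FD_ZERO(&rfds);
		FD_ZERO(&wfds);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &rfds);
		else
			FD_SET(sock, &wfds);
		if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
			break;
	}

	if (!ok)
		fprintf(stderr, "could not send cancel request: %s",
				PQcancelErrorMessage(cancel_conn));
	PQcancelFinish(cancel_conn);
	return ok;
}
```

As in the postgres_fdw hunk, the caller is responsible for calling PQcancelFinish on both the success and failure paths, and for discarding the query result on the original connection afterwards.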
Attachment: v10-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 48647b915065233c1bf30757f9ee098ac5a8e14b Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v10 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 124 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 +--
src/interfaces/libpq/fe-lobj.c | 42 ++++----
src/interfaces/libpq/fe-misc.c | 10 +-
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 149 insertions(+), 149 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 9c42ea4f819..12c3d0bc333 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -716,7 +716,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9afc6f19b9a..ab454e6cd02 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 50b5df3490b..773e9e1f3a2 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3216,17 +3216,17 @@ keep_going: /* We will come back to here until there is
* Try to validate message length before using it.
* Authentication requests can't be very large, although GSS
* auth requests may not be that small. Same for
- * NegotiateProtocolVersion. Errors can be a
- * little larger, but not huge. If we see a large apparent
- * length in an error, it means we're really talking to a
- * pre-3.0-protocol server; cope. (Before version 14, the
- * server also used the old protocol for errors that happened
- * before processing the startup packet.)
+ * NegotiateProtocolVersion. Errors can be a little larger,
+ * but not huge. If we see a large apparent length in an
+ * error, it means we're really talking to a pre-3.0-protocol
+ * server; cope. (Before version 14, the server also used the
+ * old protocol for errors that happened before processing the
+ * startup packet.)
*/
if ((beresp == 'R' || beresp == 'v') && (msgLength < 8 || msgLength > 2000))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3705,7 +3705,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3755,7 +3755,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3768,8 +3768,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7148,7 +7148,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 6220e4a1014..bed6e62435b 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -588,8 +588,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 983536de251..ab2cbf045b8 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -410,7 +410,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -962,7 +962,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -988,7 +988,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1032,7 +1032,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1084,10 +1084,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1117,7 +1117,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1135,7 +1135,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1234,7 +1234,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1245,7 +1245,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1260,7 +1260,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1273,7 +1273,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1310,10 +1310,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1321,7 +1321,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1378,7 +1378,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1394,7 +1394,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1447,7 +1447,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1489,12 +1489,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d94b648ea5b..712d572373c 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
Another small update. Mostly some trivial cleanup in the comments/docs/code,
but it also changes patch 0005 to call PQcancelFinish in more error cases.
Attachments:
v11-0003-Return-2-from-pqReadData-on-EOF.patch
From a3157ded0e07c64f8c132058508d87791cbc6ead Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v11 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because it is the client, not the server, that is
expected to close the connection. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less
than 0 instead of doing a strict comparison against -1.
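To illustrate that point: existing callers that treat any negative return value
as failure keep working unchanged, while new cancel-protocol code can single out
the -2 value. A minimal standalone sketch (the helper functions are hypothetical,
not part of libpq; they only mirror the return-code convention described above):

```c
/*
 * Hypothetical mirror of the pqReadData return convention:
 *   1 = loaded at least one more byte, 0 = no data yet,
 *  -1 = hard error, -2 = clean EOF (new in this patch).
 */
static int
is_failure(int nread)
{
	/*
	 * Existing callsites only check "less than 0", so they treat
	 * both -1 and the new -2 as failure without any change.
	 */
	return nread < 0;
}

static int
is_clean_eof(int nread)
{
	/* New cancel-protocol code can distinguish clean closure. */
	return nread == -2;
}
```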
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c..2d49188d91 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index ab2cbf045b..4e29a9f5c9 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e38142..20265dcb31 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except where the failure means that the
+ * connection was closed cleanly by the other side; in that case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
v11-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patch
From 1e588d7099c8563ca94cb6471a51f278ba9c65b6 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v11 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactor, the mechanism for iterating
over addresses in PQconnectPoll is now identical to its iteration over
hosts.
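The copy-to-array approach described above can be sketched in isolation. This is
not the patch's code: it uses a simplified stand-in struct instead of the real
struct addrinfo / AddrInfo types, but follows the same count-then-copy shape as
store_conn_addrinfo():

```c
#include <stdlib.h>

/* Simplified stand-in for getaddrinfo's linked list. */
typedef struct Node
{
	int			family;
	struct Node *next;
} Node;

/*
 * Count the list, then copy each element into a calloc'd array
 * that we own, so the list can be edited or indexed freely
 * afterwards. Returns the number of copied elements, or -1 on
 * out-of-memory. On success *out must be free'd by the caller.
 */
static int
copy_list_to_array(const Node *list, int **out)
{
	int			n = 0;
	int			i = 0;

	for (const Node *p = list; p; p = p->next)
		n++;

	if (n == 0)
	{
		*out = NULL;
		return 0;
	}

	*out = calloc(n, sizeof(int));
	if (*out == NULL)
		return -1;

	for (const Node *p = list; p; p = p->next)
		(*out)[i++] = p->family;

	return n;
}
```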
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 66ba359390..ee28e223bd 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 773e9e1f3a..46afe127f1 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2077,7 +2078,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2121,11 +2122,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2272,9 +2273,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2287,6 +2288,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2327,7 +2329,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2350,8 +2352,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2362,8 +2364,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2372,7 +2374,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2387,8 +2389,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2397,8 +2399,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2455,30 +2463,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2501,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2505,7 +2512,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2531,7 +2538,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2565,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2657,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4041,6 +4048,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do
+ * this is so that we can edit the resulting list as we please: now the memory
+ * is owned by us. Changing the original addrinfo directly is risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4048,11 +4094,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 712d572373..940db7ecc8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -461,8 +461,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 51484ca7e2..6762f4dc70 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
v11-0004-Add-non-blocking-version-of-PQcancel.patch
From 22e45642aeade25c0983e260a5ab93124e2da082 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v11 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking I/O. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way. The postgres_fdw
cancellation code has been modified to make use of this.
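For reviewers, the intended non-blocking usage looks roughly like the sketch
below. This is only an illustration: it assumes the function signatures added
by this patch (including int PQcancelSocket(const PGcancelConn *), whose exact
signature is not quoted in this message), uses select() as a stand-in for the
caller's real event loop, and elides retry/timeout handling.

```c
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Sketch: send a cancel request without blocking the event loop. */
static int
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (cancelConn == NULL)
		return -1;
	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return -1;
	}

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		fds;

		if (pollres == PGRES_POLLING_OK)
			break;				/* cancel request fully sent */
		if (pollres == PGRES_POLLING_FAILED)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return -1;
		}

		/* A real event loop would register the socket instead. */
		FD_ZERO(&fds);
		FD_SET(sock, &fds);
		if (pollres == PGRES_POLLING_READING)
			select(sock + 1, &fds, NULL, NULL, NULL);
		else
			select(sock + 1, NULL, &fds, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return 0;
}
```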
This patch also includes a test for all of libpq's cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 982 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 0e7ae70c70..53b64865bb 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5628,13 +5628,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandons processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. By using <xref linkend="libpq-PQcancelStatus"/>
+ it can be checked if there was any error when sending the cancellation
+ request. If <xref linkend="libpq-PQcancelStatus"/>
+ returns for <symbol>CONNECTION_OK</symbol> the request was
+ successfully sent, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occured. If an error occured the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but it won't instantly start sending a cancel request over this
+ connection like <xref linkend="libpq-PQcancelSend"/>.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>,
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, that connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5676,14 +5881,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5691,21 +5910,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ the server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5717,13 +5937,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8872,7 +9101,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc88370..f56e8c185c 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 46afe127f1..cef341adcd 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,8 +376,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -600,8 +602,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would no longer have access
+ * to the secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -732,6 +743,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +1025,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2031,10 +2188,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2176,7 +2341,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2301,13 +2469,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -2898,6 +3070,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3610,8 +3805,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -3978,19 +4179,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4101,6 +4291,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4108,6 +4323,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4161,7 +4385,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4576,6 +4806,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once
+ * we reach the CONNECTION_AWAITING_RESPONSE state we need to do our own
+ * thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than EOF as a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we
+ * unexpectedly do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d9220496..95899b9f55 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 940db7ecc8..cd1857ea49 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -594,6 +598,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 6111bf9b67..6d88d419a6 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Attachment: v11-0005-Start-using-new-libpq-cancel-APIs.patch
From e8d1c900e5599d460090b757e7a125d23dcf7bce Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v11 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 8982d623d3..3a5f2d3424 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 7760380f00..3630644f5e 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 2350cfe148..0e365415d4 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index c37aa80383..202ecf0920 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8..b32448c010 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153..3781f7982b 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v11-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 1c9c4c45602115471aa6feabbeef3386e0a2c88b Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v11 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 124 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 +--
src/interfaces/libpq/fe-lobj.c | 42 ++++----
src/interfaces/libpq/fe-misc.c | 10 +-
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 149 insertions(+), 149 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 9c42ea4f81..12c3d0bc33 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -716,7 +716,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9afc6f19b9..ab454e6cd0 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 50b5df3490..773e9e1f3a 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3216,17 +3216,17 @@ keep_going: /* We will come back to here until there is
* Try to validate message length before using it.
* Authentication requests can't be very large, although GSS
* auth requests may not be that small. Same for
- * NegotiateProtocolVersion. Errors can be a
- * little larger, but not huge. If we see a large apparent
- * length in an error, it means we're really talking to a
- * pre-3.0-protocol server; cope. (Before version 14, the
- * server also used the old protocol for errors that happened
- * before processing the startup packet.)
+ * NegotiateProtocolVersion. Errors can be a little larger,
+ * but not huge. If we see a large apparent length in an
+ * error, it means we're really talking to a pre-3.0-protocol
+ * server; cope. (Before version 14, the server also used the
+ * old protocol for errors that happened before processing the
+ * startup packet.)
*/
if ((beresp == 'R' || beresp == 'v') && (msgLength < 8 || msgLength > 2000))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3705,7 +3705,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3755,7 +3755,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3768,8 +3768,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7148,7 +7148,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e38..0c2dae6ed9 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a46859..206266fd04 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a6..660cdec93c 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a88416..b79d74f748 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b3764..3ecc7bf615 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 6220e4a101..bed6e62435 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -588,8 +588,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 983536de25..ab2cbf045b 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -410,7 +410,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -962,7 +962,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -988,7 +988,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1032,7 +1032,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1084,10 +1084,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1117,7 +1117,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1135,7 +1135,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1234,7 +1234,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1245,7 +1245,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1260,7 +1260,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1273,7 +1273,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1310,10 +1310,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1321,7 +1321,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1378,7 +1378,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1394,7 +1394,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1447,7 +1447,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1489,12 +1489,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d..8069e38142 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d94b648ea5..712d572373 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
This looks like it needs a rebase.
=== Applying patches on top of PostgreSQL commit ID
71a75626d5271f2bcdbdc43b8c13065c4634fd9f ===
=== applying patch ./v11-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
patching file src/interfaces/libpq/fe-auth-scram.c
patching file src/interfaces/libpq/fe-auth.c
patching file src/interfaces/libpq/fe-connect.c
Hunk #35 FAILED at 3216.
Hunk #36 succeeded at 3732 (offset 27 lines).
Hunk #37 succeeded at 3782 (offset 27 lines).
Hunk #38 succeeded at 3795 (offset 27 lines).
Hunk #39 succeeded at 7175 (offset 27 lines).
1 out of 39 hunks FAILED -- saving rejects to file
src/interfaces/libpq/fe-connect.c.rej
patching file src/interfaces/libpq/fe-exec.c
patching file src/interfaces/libpq/fe-lobj.c
patching file src/interfaces/libpq/fe-misc.c
patching file src/interfaces/libpq/fe-protocol3.c
patching file src/interfaces/libpq/fe-secure-common.c
patching file src/interfaces/libpq/fe-secure-gssapi.c
Hunk #3 succeeded at 590 (offset 2 lines).
patching file src/interfaces/libpq/fe-secure-openssl.c
Hunk #3 succeeded at 415 (offset 5 lines).
Hunk #4 succeeded at 967 (offset 5 lines).
Hunk #5 succeeded at 993 (offset 5 lines).
Hunk #6 succeeded at 1037 (offset 5 lines).
Hunk #7 succeeded at 1089 (offset 5 lines).
Hunk #8 succeeded at 1122 (offset 5 lines).
Hunk #9 succeeded at 1140 (offset 5 lines).
Hunk #10 succeeded at 1239 (offset 5 lines).
Hunk #11 succeeded at 1250 (offset 5 lines).
Hunk #12 succeeded at 1265 (offset 5 lines).
Hunk #13 succeeded at 1278 (offset 5 lines).
Hunk #14 succeeded at 1315 (offset 5 lines).
Hunk #15 succeeded at 1326 (offset 5 lines).
Hunk #16 succeeded at 1383 (offset 5 lines).
Hunk #17 succeeded at 1399 (offset 5 lines).
Hunk #18 succeeded at 1452 (offset 5 lines).
Hunk #19 succeeded at 1494 (offset 5 lines).
patching file src/interfaces/libpq/fe-secure.c
patching file src/interfaces/libpq/libpq-int.h
On Tue, 28 Feb 2023 at 15:59, Gregory Stark <stark@postgresql.org> wrote:
This looks like it needs a rebase.
So I'm updating the patch to Waiting on Author
On Wed, 1 Mar 2023 at 20:09, Greg S <stark.cfm@gmail.com> wrote:
On Tue, 28 Feb 2023 at 15:59, Gregory Stark <stark@postgresql.org> wrote:
This looks like it needs a rebase.
done
Attachments:
v12-0004-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From b05d03feaf2f758b3696a99d09ed1c66f2309488 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v12 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
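For reviewers, the intended calling pattern of the new non-blocking API can be sketched as follows. This is an untested sketch against the API proposed in this patch (PQcancelConn, PQcancelPoll, PQcancelSocket, PQcancelStatus, PQcancelErrorMessage, PQcancelFinish), so the names and semantics are assumptions taken from the patch itself; the inline select() wait stands in for whatever the application's event loop would do:

```c
/* Sketch: dispatch a cancel request without blocking the event loop.
 * Uses the PQcancel* functions proposed in this patch; in a real
 * event-loop program the socket waits would be registered with the
 * loop instead of being done inline with select(). */
static bool
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres;
	bool		success;

	if (cancelConn == NULL)
		return false;

	while ((pollres = PQcancelPoll(cancelConn)) != PGRES_POLLING_OK &&
		   pollres != PGRES_POLLING_FAILED)
	{
		int			sock = PQcancelSocket(cancelConn);
		fd_set		fds;

		if (sock < 0)
			break;
		FD_ZERO(&fds);
		FD_SET(sock, &fds);

		/* Wait for whichever direction PQcancelPoll asked for */
		if (pollres == PGRES_POLLING_READING)
			select(sock + 1, &fds, NULL, NULL, NULL);
		else
			select(sock + 1, NULL, &fds, NULL, NULL);
	}

	success = (PQcancelStatus(cancelConn) == CONNECTION_OK);
	if (!success)
		fprintf(stderr, "cancel failed: %s",
				PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
	return success;
}
```

The loop mirrors the documented PQconnectPoll contract; note that CONNECTION_OK here only means the cancel request was dispatched, not that the query was actually cancelled.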
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 457 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 987 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3ccd8ff9421..6e0add50b12 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5628,13 +5628,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection
+ options that only make sense for authentication or after authentication
+ are ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. By using <xref linkend="libpq-PQcancelStatus"/>
+ it can be checked if there was any error when sending the cancellation
+ request. If <xref linkend="libpq-PQcancelStatus"/>
+ returns for <symbol>CONNECTION_OK</symbol> the request was
+ successfully sent, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occured. If an error occured the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but it won't instantly start sending a cancel request over this
+ connection like <xref linkend="libpq-PQcancelSend"/>.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5676,14 +5881,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5691,21 +5910,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as tha original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5717,13 +5937,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8872,7 +9101,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 41deeee9a63..ca2d1c31784 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,8 +376,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -600,8 +602,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would no longer have access
+ * to the secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -732,6 +743,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a request on the given connection. This requires
+ * polling the returned PGconn to actually complete the cancellation of the
+ * request.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +1025,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2031,10 +2188,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2176,7 +2341,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2301,13 +2469,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -2898,6 +3070,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3637,8 +3832,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo is kept for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4005,19 +4206,13 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ /*
+ * For cancel requests we don't free the addrinfo in closePGconn (see
+ * comment there for reasoning). So we still have to free it here.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4128,6 +4323,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4135,6 +4355,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4188,7 +4417,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4603,6 +4838,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we consider any error other than a clean EOF a failure
+ * and report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we receive
+ * some we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4d40e8a2fbb..a6d9f6eae38 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -594,6 +598,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..e8e904892c7 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test that PQcancelReset works on the cancel connection and that it
+ * can be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v12-0003-Return-2-from-pqReadData-on-EOF.patchapplication/octet-stream; name=v12-0003-Return-2-from-pqReadData-on-EOF.patchDownload
From c4af4559ed5f8d58cb6ccdd808324dc5ad8f339e Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v12 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated as all of them check if the result is less than 0
instead of a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index e6da377fb9d..8b5909e08ef 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means that there
+ * was a clean connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
v12-0005-Start-using-new-libpq-cancel-APIs.patchapplication/octet-stream; name=v12-0005-Start-using-new-libpq-cancel-APIs.patchDownload
From ff3c079c349028de52be3a12e12e78e946c28bdd Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v12 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 12b54f15cd6..bc3e5181683 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
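The simpler call sites touched above (dblink, disconnectDatabase) use the blocking one-shot path instead of the poll loop. As an uncompiled sketch (again assuming the PQcancelSend/PQcancelStatus/PQcancelErrorMessage/PQcancelFinish functions added by this patch series):

```c
/* Sketch: blocking one-shot cancellation, as in the dblink change above. */
PGcancelConn *cancelConn = PQcancelSend(conn);	/* connects and sends the request */

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));
PQcancelFinish(cancelConn);		/* always free the PGcancelConn */
```

Unlike the old PQgetCancel/PQcancel pair, there is no separate error buffer to manage, and PQcancelFinish is safe to call unconditionally, which is why disconnectDatabase collapses to a single line.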
v12-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patchapplication/octet-stream; name=v12-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patchDownload
From a8a8d3c176bb5e6c041e708ab7fc49950ee7dd64 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v12 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 110 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 ++--
src/interfaces/libpq/fe-lobj.c | 42 ++++-----
src/interfaces/libpq/fe-misc.c | 10 +--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++-------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 142 insertions(+), 142 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 9c42ea4f819..12c3d0bc333 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -716,7 +716,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9afc6f19b9a..ab454e6cd02 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8f80c35c894..97e47f05852 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3732,7 +3732,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3782,7 +3782,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3795,8 +3795,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7175,7 +7175,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 6a4431ddfe9..e6da377fb9d 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -967,7 +967,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -993,7 +993,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1037,7 +1037,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1089,10 +1089,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1122,7 +1122,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1140,7 +1140,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1239,7 +1239,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1250,7 +1250,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1265,7 +1265,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1278,7 +1278,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1315,10 +1315,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1326,7 +1326,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1383,7 +1383,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1399,7 +1399,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1452,7 +1452,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1494,12 +1494,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d7ec5ed4293..85289980a11 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
v12-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patchapplication/octet-stream; name=v12-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patchDownload
From a5c2265eb634610e06061e511e65172d9fae7734 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v12 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactor, the mechanism for
iteration over addresses in PQconnectPoll is now identical to its
iteration over hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 66ba359390f..ee28e223bd7 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 97e47f05852..41deeee9a63 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2077,7 +2078,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2121,11 +2122,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2272,9 +2273,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2287,6 +2288,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2327,7 +2329,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2350,8 +2352,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2362,8 +2364,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2372,7 +2374,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2387,8 +2389,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2397,8 +2399,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2455,30 +2463,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2501,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2505,7 +2512,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2531,7 +2538,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2565,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2657,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4068,6 +4075,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do this
+ * is so that we can edit the resulting list as we please, because now the memory
+ * is owned by us. Changing the original addrinfo directly is risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4075,11 +4121,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 85289980a11..4d40e8a2fbb 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -461,8 +461,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86a9303bf56..fa8881c9d93 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
On Wed, 1 Mar 2023 at 14:48, Jelte Fennema <postgres@jeltef.nl> wrote:
This looks like it needs a rebase.
done
Great. Please update the CF entry to Needs Review or Ready for
Committer as appropriate :)
--
Gregory Stark
As Commitfest Manager
On Wed, 1 Mar 2023 at 20:51, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:
Great. Please update the CF entry to Needs Review or Ready for
Committer as appropriate :)
I realised I rebased a slightly outdated version of my branch (thanks
to git's --force-with-lease flag). Attached is the newest version,
rebased (only patch 0004 changed slightly).
And I updated the CF entry to Ready for Committer now.
Attachments:
v13-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patch (application/x-patch)
From 3150f762dd0f63fa3be70e0e5e4dd57d9758beb6 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v13 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactor, the mechanism for
iteration over addresses in PQconnectPoll is now identical to its
iteration over hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 66ba359390f..ee28e223bd7 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 97e47f05852..41deeee9a63 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2077,7 +2078,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2121,11 +2122,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2272,9 +2273,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2287,6 +2288,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2327,7 +2329,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2350,8 +2352,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2362,8 +2364,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2372,7 +2374,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2387,8 +2389,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2397,8 +2399,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2455,30 +2463,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2501,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2505,7 +2512,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2531,7 +2538,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2565,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2657,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4068,6 +4075,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do this
+ * is so that we can edit the resulting list as we please, because now the memory
+ * is owned by us. Changing the original addrinfo directly is risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4075,11 +4121,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 85289980a11..4d40e8a2fbb 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -461,8 +461,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86a9303bf56..fa8881c9d93 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
v13-0004-Add-non-blocking-version-of-PQcancel.patch (application/x-patch)
From bcc6d26f63cbccf465bc54ce8aa8936aa719b402 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v13 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 982 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3ccd8ff9421..6e0add50b12 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5628,13 +5628,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection options
+ that only make sense for authentication or after authentication are
+ ignored though, because cancellation requests do not require
+ authentication.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ whether sending the cancellation request resulted in an error. If
+ <xref linkend="libpq-PQcancelStatus"/> returns
+ <symbol>CONNECTION_OK</symbol>, the request was successfully sent, but
+ if it returns <symbol>CONNECTION_BAD</symbol> an error occurred. In
+ that case the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used
+ in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>,
+ but unlike <xref linkend="libpq-PQcancelSend"/> it does not immediately
+ start sending a cancel request over this connection.
+ <xref linkend="libpq-PQcancelStatus"/> should be called on the return
+ value to check if the <structname>PGcancelConn</structname> was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request, use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>,
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed. In contrast, a <symbol>CONNECTION_OK</symbol>
+ result for a <structname>PGconn</structname> means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, this connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
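For reviewers of the documentation above, here is a rough sketch of how the new functions are intended to be combined, condensed to a select() loop standing in for a real event loop. It assumes a live PGconn plus the API added by this patch, so it is illustrative only and needs a running server to actually execute:

```c
#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Sketch: drive a cancel request to completion without issuing any
 * blocking libpq call. Returns 1 on success, 0 on failure. In a real
 * event loop each iteration would instead be a callback registered for
 * the readiness condition that PQcancelPoll asked for.
 */
static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;
	int			ok;

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		PQcancelFinish(cancelConn);
		return 0;
	}

	while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
	{
		fd_set		rfds;
		fd_set		wfds;
		int			sock;

		/* Advance the state machine, then wait for what it asked for. */
		pollres = PQcancelPoll(cancelConn);
		sock = PQcancelSocket(cancelConn);
		if (sock < 0)
			break;

		FD_ZERO(&rfds);
		FD_ZERO(&wfds);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &rfds);
		else if (pollres == PGRES_POLLING_WRITING)
			FD_SET(sock, &wfds);
		else
			continue;			/* done or failed; loop condition exits */

		if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
			break;
	}

	ok = (PQcancelStatus(cancelConn) == CONNECTION_OK);
	PQcancelFinish(cancelConn);
	return ok;
}
```

On failure, PQcancelErrorMessage(cancelConn) could be reported before PQcancelFinish is called.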
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5676,14 +5881,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5691,21 +5910,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ the server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5717,13 +5937,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8872,7 +9101,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 41deeee9a63..163768d07c4 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,8 +376,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -600,8 +602,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -732,6 +743,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +1025,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2031,10 +2188,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2176,7 +2341,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2301,13 +2469,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -2898,6 +3070,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
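The cancel request sent in the hunk above has a small fixed wire format: once pqPacketSend prepends the length word, it is a 16-byte packet of four big-endian int32 fields. The following standalone sketch serializes it; `build_cancel_packet` is a hypothetical helper, and the request code 80877102 is `(1234 << 16) | 5678` as documented in the frontend/backend protocol:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define CANCEL_REQUEST_CODE ((1234 << 16) | 5678)	/* 80877102 */

/*
 * Serialize a complete cancel-request packet: total length, the magic
 * request code, then the PID and secret key of the backend to cancel.
 * All fields are 32-bit big-endian integers; buf must hold 16 bytes.
 */
static void
build_cancel_packet(uint8_t *buf, uint32_t be_pid, uint32_t be_key)
{
	uint32_t	fields[4];

	fields[0] = htonl(16);		/* packet length, including itself */
	fields[1] = htonl(CANCEL_REQUEST_CODE); /* identifies a cancel request */
	fields[2] = htonl(be_pid);	/* backend process ID */
	fields[3] = htonl(be_key);	/* per-connection secret */
	memcpy(buf, fields, sizeof(fields));
}
```

Because the secret key travels in this packet, sending it over an encrypted connection (as the new code path allows) is what closes the plaintext exposure of the old PQcancel path.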
@@ -3637,8 +3832,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4005,19 +4206,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4128,6 +4318,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4135,6 +4350,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4188,7 +4412,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4603,6 +4833,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than EOF as a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we unexpectedly
+ * receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4d40e8a2fbb..a6d9f6eae38 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -594,6 +598,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..e8e904892c7 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * Test that PQcancelReset works on the cancel connection and that the
+ * connection can be reused afterwards.
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
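For reviewers, the driving loop this new API expects from an event-loop client can be sketched as follows. Everything below is a hypothetical stub (the real `PQcancelConn`/`PQcancelPoll` are declared in the patch above and need a patched libpq plus a running server); only the control flow is the point: the caller repeatedly invokes the poll function and hands control back to its event loop between calls instead of blocking inside libpq.

```c
#include <assert.h>

/*
 * Stubs standing in for the patched libpq API; real code would include
 * <libpq-fe.h> and wait on PQcancelSocket() in its event loop. This
 * stub "connects" after one write-wait and one read-wait.
 */
typedef enum
{
	POLLING_READING,
	POLLING_WRITING,
	POLLING_FAILED,
	POLLING_OK
} PollingStatus;

typedef struct
{
	int			step;
} StubCancelConn;

static PollingStatus
stub_cancel_poll(StubCancelConn *conn)
{
	if (conn->step == 0)
	{
		conn->step++;
		return POLLING_WRITING;
	}
	if (conn->step == 1)
	{
		conn->step++;
		return POLLING_READING;
	}
	return POLLING_OK;
}

/*
 * The non-blocking driving pattern: call poll, and between calls yield
 * to the event loop instead of blocking inside libpq. Returns 1 on
 * success, 0 on failure; counts how often we would have waited on the
 * socket.
 */
static int
drive_cancel(StubCancelConn *conn, int *wait_cycles)
{
	for (;;)
	{
		PollingStatus st = stub_cancel_poll(conn);

		if (st == POLLING_OK)
			return 1;
		if (st == POLLING_FAILED)
			return 0;
		/* a real event loop would wait here for the socket to become
		 * readable or writable, as the test in this patch does via
		 * select() */
		(*wait_cycles)++;
	}
}
```

With the stub above, the driver succeeds after two wait cycles (one write-wait, one read-wait); with the real API the number of cycles depends on the network.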
Attachment: v13-0003-Return-2-from-pqReadData-on-EOF.patch
From aaee6d690752bdc93544769f635292b4661f74c6 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v13 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because it is the client, not the server, that is
expected to close the connection. But for the Postgres cancellation
protocol the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, since all of them check whether the result is less
than 0 instead of comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index e6da377fb9d..8b5909e08ef 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure is a clean connection
+ * closure by the other side, in which case -2 is returned. Currently
+ * only the TLS implementation of pqsecure_read ever returns -2; for the
+ * other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Attachment: v13-0005-Start-using-new-libpq-cancel-APIs.patch
From 9f860014bfbfea6e01338b540f4e387de61c7b72 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v13 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 12b54f15cd6..bc3e5181683 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
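Taken together, the call sites updated above converge on one pattern: send the cancel, inspect the status, report, and always free the object. A minimal sketch of that pattern, using hypothetical stand-ins for the new API (real code uses `PQcancelSend`/`PQcancelStatus`/`PQcancelErrorMessage`/`PQcancelFinish` from the patched libpq):

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical stand-ins for the new cancel API; they only model the
 * send-then-inspect shape used by dblink_cancel_query above, not real
 * network behavior.
 */
typedef struct
{
	int			bad;			/* models PQcancelStatus() == CONNECTION_BAD */
	const char *err;			/* models PQcancelErrorMessage() */
} StubCancelConn;

static StubCancelConn *
stub_cancel_send(int server_reachable)
{
	static StubCancelConn conn;

	conn.bad = !server_reachable;
	conn.err = server_reachable ? "" : "could not connect";
	return &conn;
}

/* Send-and-report, as in dblink_cancel_query: "OK" on success,
 * otherwise the connection's error message. */
static const char *
cancel_and_report(int server_reachable)
{
	StubCancelConn *conn = stub_cancel_send(server_reachable);
	const char *msg = conn->bad ? conn->err : "OK";

	/* a real caller would PQcancelFinish(conn) here, even on error */
	return msg;
}
```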
Attachment: v13-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 14491838d2060d255bd77baadc565516e68cfc23 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v13 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 110 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 ++--
src/interfaces/libpq/fe-lobj.c | 42 ++++-----
src/interfaces/libpq/fe-misc.c | 10 +--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++-------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 142 insertions(+), 142 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 9c42ea4f819..12c3d0bc333 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -716,7 +716,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9afc6f19b9a..ab454e6cd02 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8f80c35c894..97e47f05852 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3732,7 +3732,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3782,7 +3782,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3795,8 +3795,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7175,7 +7175,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 6a4431ddfe9..e6da377fb9d 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -967,7 +967,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -993,7 +993,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1037,7 +1037,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1089,10 +1089,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1122,7 +1122,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1140,7 +1140,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1239,7 +1239,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1250,7 +1250,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1265,7 +1265,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1278,7 +1278,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1315,10 +1315,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1326,7 +1326,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1383,7 +1383,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1399,7 +1399,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1452,7 +1452,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1494,12 +1494,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d7ec5ed4293..85289980a11 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
Updated wording in the docs slightly.
On Wed, 1 Mar 2023 at 21:00, Jelte Fennema <postgres@jeltef.nl> wrote:
On Wed, 1 Mar 2023 at 20:51, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:
Great. Please update the CF entry to Needs Review or Ready for
Committer as appropriate :)

I realised I rebased a slightly outdated version of my branch (thanks
to git's --force-with-lease flag). Attached is the newest version
rebased (only patch 0004 changed slightly).

And I updated the CF entry to Ready for Committer now.
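For anyone reviewing, here is a rough sketch of how the new non-blocking API is meant to be driven from a polling loop. The function names (PQcancelConn, PQcancelPoll, PQcancelSocket, PQcancelStatus, PQcancelErrorMessage, PQcancelFinish) are taken from this patch and the flow mirrors the existing PQconnectPoll protocol; treat the exact call sequence as illustrative, since it may differ slightly from the final patch:

```c
/* Illustrative sketch only: drives a cancel request to completion
 * without blocking the caller on libpq internals. In a real event
 * loop the socket would be registered with the loop rather than
 * passed to a blocking select() as done here for brevity. */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

static int
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;

	/* Creation can fail, e.g. out of memory or bad PGconn */
	if (cancelConn == NULL ||
		PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "cancel failed: %s",
				cancelConn ? PQcancelErrorMessage(cancelConn) : "out of memory\n");
		PQcancelFinish(cancelConn);
		return -1;
	}

	/* Poll until the cancel request has been fully sent */
	while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
	{
		fd_set		fds;
		int			sock = PQcancelSocket(cancelConn);

		FD_ZERO(&fds);
		FD_SET(sock, &fds);
		if (pollres == PGRES_POLLING_READING)
			select(sock + 1, &fds, NULL, NULL, NULL);
		else
			select(sock + 1, NULL, &fds, NULL, NULL);

		pollres = PQcancelPoll(cancelConn);
	}

	if (pollres == PGRES_POLLING_FAILED)
		fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));

	PQcancelFinish(cancelConn);
	return pollres == PGRES_POLLING_OK ? 0 : -1;
}
```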
Attachments:
v14-0004-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From ac942144724dae7baa34dfa8bb153f2ac96acbbe Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v14 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 982 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3ccd8ff9421..9e4af64e291 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -4909,7 +4909,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5628,13 +5628,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection options
+ that are only used during authentication or after authentication of
+ the client are ignored though, because cancellation requests do not
+ require authentication and the connection is closed right after the
+ cancellation request is submitted.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ if any error occurred while sending the cancellation request. If
+ <xref linkend="libpq-PQcancelStatus"/> returns <symbol>CONNECTION_OK</symbol>
+ the request was sent successfully, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. If an error occurred, the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used to
+ send cancellation requests in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection, unlike <xref linkend="libpq-PQcancelSend"/>.
+ The return value should still be passed to <xref linkend="libpq-PQcancelStatus"/>
+ though, to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, a cancellation connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5676,14 +5881,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from within a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5691,21 +5910,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5717,13 +5937,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8872,7 +9101,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 41deeee9a63..163768d07c4 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -376,8 +376,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -600,8 +602,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -732,6 +743,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -907,6 +1025,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2031,10 +2188,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn. So leave
+ * these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2176,7 +2341,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2301,13 +2469,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -2898,6 +3070,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3637,8 +3832,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4005,19 +4206,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4128,6 +4318,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4135,6 +4350,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4188,7 +4412,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4603,6 +4833,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBstart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we consider any error other than an EOF a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we unexpectedly
+ * do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4d40e8a2fbb..a6d9f6eae38 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -397,6 +397,10 @@ struct pg_conn
char *ssl_max_protocol_version; /* maximum TLS protocol version */
char *target_session_attrs; /* desired session properties */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -594,6 +598,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..e8e904892c7 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("connection to database for monitoring failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("socket did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("socket did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Attachment: v14-0003-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From b043079a922797b44f360dc731a0bd5f3b272609 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v14 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because it is normally the client, not the server,
that closes the connection. But for the Postgres cancellation protocol,
the distinction between an error and a clean connection closure is
important, because a clean closure is how the server signals that the
cancellation request was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing call sites of pqReadData, or of its internal functions, need
to be updated, since all of them check whether the result is less than 0
rather than comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection was closed cleanly by the other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index e6da377fb9d..8b5909e08ef 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure means that the connection
+ * was closed cleanly, in which case -2 is returned. Currently only the TLS
+ * implementation of pqsecure_read ever returns -2; for the other
+ * implementations a clean connection closure is detected in pqReadData
+ * instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Attachment: v14-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 44f529d3e5844f537fb9749175fae39e32e771a3 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v14 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 12b54f15cd6..bc3e5181683 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch (application/octet-stream)
From 2205205498ac07fe81a71835f9c3fe4df3d8b36a Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v14 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-auth-scram.c | 2 +-
src/interfaces/libpq/fe-auth.c | 8 +-
src/interfaces/libpq/fe-connect.c | 110 +++++++++++------------
src/interfaces/libpq/fe-exec.c | 16 ++--
src/interfaces/libpq/fe-lobj.c | 42 ++++-----
src/interfaces/libpq/fe-misc.c | 10 +--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +-
src/interfaces/libpq/fe-secure-gssapi.c | 12 +--
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++-------
src/interfaces/libpq/fe-secure.c | 8 +-
src/interfaces/libpq/libpq-int.h | 4 +-
12 files changed, 142 insertions(+), 142 deletions(-)
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 9c42ea4f819..12c3d0bc333 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -716,7 +716,7 @@ read_server_final_message(fe_scram_state *state, char *input)
return false;
}
libpq_append_conn_error(conn, "error received from server in SCRAM exchange: %s",
- errmsg);
+ errmsg);
return false;
}
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9afc6f19b9a..ab454e6cd02 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -73,7 +73,7 @@ pg_GSS_continue(PGconn *conn, int payloadlen)
if (!ginbuf.value)
{
libpq_append_conn_error(conn, "out of memory allocating GSSAPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(ginbuf.value, payloadlen, conn))
@@ -223,7 +223,7 @@ pg_SSPI_continue(PGconn *conn, int payloadlen)
if (!inputbuf)
{
libpq_append_conn_error(conn, "out of memory allocating SSPI buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
if (pqGetnchar(inputbuf, payloadlen, conn))
@@ -623,7 +623,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
if (!challenge)
{
libpq_append_conn_error(conn, "out of memory allocating SASL buffer (%d)",
- payloadlen);
+ payloadlen);
return STATUS_ERROR;
}
@@ -1277,7 +1277,7 @@ PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
else
{
libpq_append_conn_error(conn, "unrecognized password encryption algorithm \"%s\"",
- algorithm);
+ algorithm);
return NULL;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8f80c35c894..97e47f05852 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1079,7 +1079,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d host names to %d hostaddr values",
- count_comma_separated_elems(conn->pghost), conn->nconnhost);
+ count_comma_separated_elems(conn->pghost), conn->nconnhost);
return false;
}
}
@@ -1159,7 +1159,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "could not match %d port numbers to %d hosts",
- count_comma_separated_elems(conn->pgport), conn->nconnhost);
+ count_comma_separated_elems(conn->pgport), conn->nconnhost);
return false;
}
}
@@ -1248,7 +1248,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "channel_binding", conn->channel_binding);
+ "channel_binding", conn->channel_binding);
return false;
}
}
@@ -1273,7 +1273,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "sslmode", conn->sslmode);
+ "sslmode", conn->sslmode);
return false;
}
@@ -1293,7 +1293,7 @@ connectOptions2(PGconn *conn)
case 'v': /* "verify-ca" or "verify-full" */
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "sslmode value \"%s\" invalid when SSL support is not compiled in",
- conn->sslmode);
+ conn->sslmode);
return false;
}
#endif
@@ -1313,16 +1313,16 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_min_protocol_version",
- conn->ssl_min_protocol_version);
+ "ssl_min_protocol_version",
+ conn->ssl_min_protocol_version);
return false;
}
if (!sslVerifyProtocolVersion(conn->ssl_max_protocol_version))
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "ssl_max_protocol_version",
- conn->ssl_max_protocol_version);
+ "ssl_max_protocol_version",
+ conn->ssl_max_protocol_version);
return false;
}
@@ -1359,7 +1359,7 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "gssencmode value \"%s\" invalid when GSSAPI support is not compiled in",
- conn->gssencmode);
+ conn->gssencmode);
return false;
}
#endif
@@ -1392,8 +1392,8 @@ connectOptions2(PGconn *conn)
{
conn->status = CONNECTION_BAD;
libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
- "target_session_attrs",
- conn->target_session_attrs);
+ "target_session_attrs",
+ conn->target_session_attrs);
return false;
}
}
@@ -1609,7 +1609,7 @@ connectNoDelay(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "could not set socket to TCP no delay mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1787,7 +1787,7 @@ parse_int_param(const char *value, int *result, PGconn *conn,
error:
libpq_append_conn_error(conn, "invalid integer value \"%s\" for connection option \"%s\"",
- value, context);
+ value, context);
return false;
}
@@ -1816,9 +1816,9 @@ setKeepalivesIdle(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- PG_TCP_KEEPALIVE_IDLE_STR,
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ PG_TCP_KEEPALIVE_IDLE_STR,
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1850,9 +1850,9 @@ setKeepalivesInterval(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPINTVL",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPINTVL",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1885,9 +1885,9 @@ setKeepalivesCount(PGconn *conn)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_KEEPCNT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_KEEPCNT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -1949,8 +1949,8 @@ prepKeepalivesWin32(PGconn *conn)
if (!setKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
- "WSAIoctl", "SIO_KEEPALIVE_VALS",
- WSAGetLastError());
+ "WSAIoctl", "SIO_KEEPALIVE_VALS",
+ WSAGetLastError());
return 0;
}
return 1;
@@ -1983,9 +1983,9 @@ setTCPUserTimeout(PGconn *conn)
char sebuf[256];
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "TCP_USER_TIMEOUT",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "TCP_USER_TIMEOUT",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
return 0;
}
#endif
@@ -2354,7 +2354,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
- ch->host, gai_strerror(ret));
+ ch->host, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2366,7 +2366,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
- ch->hostaddr, gai_strerror(ret));
+ ch->hostaddr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2377,8 +2377,8 @@ keep_going: /* We will come back to here until there is
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
libpq_append_conn_error(conn, "Unix-domain socket path \"%s\" is too long (maximum %d bytes)",
- portstr,
- (int) (UNIXSOCK_PATH_BUFLEN - 1));
+ portstr,
+ (int) (UNIXSOCK_PATH_BUFLEN - 1));
goto keep_going;
}
@@ -2391,7 +2391,7 @@ keep_going: /* We will come back to here until there is
if (ret || !conn->addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
- portstr, gai_strerror(ret));
+ portstr, gai_strerror(ret));
goto keep_going;
}
break;
@@ -2513,7 +2513,7 @@ keep_going: /* We will come back to here until there is
}
emitHostIdentityInfo(conn, host_addr);
libpq_append_conn_error(conn, "could not create socket: %s",
- SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(errorno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2543,7 +2543,7 @@ keep_going: /* We will come back to here until there is
if (!pg_set_noblock(conn->sock))
{
libpq_append_conn_error(conn, "could not set socket to nonblocking mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2552,7 +2552,7 @@ keep_going: /* We will come back to here until there is
if (fcntl(conn->sock, F_SETFD, FD_CLOEXEC) == -1)
{
libpq_append_conn_error(conn, "could not set socket to close-on-exec mode: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
conn->try_next_addr = true;
goto keep_going;
}
@@ -2581,9 +2581,9 @@ keep_going: /* We will come back to here until there is
(char *) &on, sizeof(on)) < 0)
{
libpq_append_conn_error(conn, "%s(%s) failed: %s",
- "setsockopt",
- "SO_KEEPALIVE",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ "setsockopt",
+ "SO_KEEPALIVE",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
err = 1;
}
else if (!setKeepalivesIdle(conn)
@@ -2708,7 +2708,7 @@ keep_going: /* We will come back to here until there is
(char *) &optval, &optlen) == -1)
{
libpq_append_conn_error(conn, "could not get socket error status: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
else if (optval != 0)
@@ -2735,7 +2735,7 @@ keep_going: /* We will come back to here until there is
&conn->laddr.salen) < 0)
{
libpq_append_conn_error(conn, "could not get client address from socket: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2775,7 +2775,7 @@ keep_going: /* We will come back to here until there is
libpq_append_conn_error(conn, "requirepeer parameter is not supported on this platform");
else
libpq_append_conn_error(conn, "could not get peer credentials: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2788,7 +2788,7 @@ keep_going: /* We will come back to here until there is
if (strcmp(remote_username, conn->requirepeer) != 0)
{
libpq_append_conn_error(conn, "requirepeer specifies \"%s\", but actual peer user name is \"%s\"",
- conn->requirepeer, remote_username);
+ conn->requirepeer, remote_username);
free(remote_username);
goto error_return;
}
@@ -2829,7 +2829,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send GSSAPI negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
@@ -2840,7 +2840,7 @@ keep_going: /* We will come back to here until there is
else if (!conn->gctx && conn->gssencmode[0] == 'r')
{
libpq_append_conn_error(conn,
- "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
+ "GSSAPI encryption required but was impossible (possibly no credential cache, no server support, or using a local socket)");
goto error_return;
}
#endif
@@ -2882,7 +2882,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, &pv, sizeof(pv)) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send SSL negotiation packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
/* Ok, wait for response */
@@ -2911,7 +2911,7 @@ keep_going: /* We will come back to here until there is
if (pqPacketSend(conn, 0, startpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send startup packet: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
free(startpacket);
goto error_return;
}
@@ -3012,7 +3012,7 @@ keep_going: /* We will come back to here until there is
else
{
libpq_append_conn_error(conn, "received invalid response to SSL negotiation: %c",
- SSLok);
+ SSLok);
goto error_return;
}
}
@@ -3123,7 +3123,7 @@ keep_going: /* We will come back to here until there is
else if (gss_ok != 'G')
{
libpq_append_conn_error(conn, "received invalid response to GSSAPI negotiation: %c",
- gss_ok);
+ gss_ok);
goto error_return;
}
}
@@ -3201,7 +3201,7 @@ keep_going: /* We will come back to here until there is
if (!(beresp == 'R' || beresp == 'v' || beresp == 'E'))
{
libpq_append_conn_error(conn, "expected authentication request from server, but received %c",
- beresp);
+ beresp);
goto error_return;
}
@@ -3732,7 +3732,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SHOW transaction_read_only");
+ "SHOW transaction_read_only");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3782,7 +3782,7 @@ keep_going: /* We will come back to here until there is
/* Append error report to conn->errorMessage. */
libpq_append_conn_error(conn, "\"%s\" failed",
- "SELECT pg_is_in_recovery()");
+ "SELECT pg_is_in_recovery()");
/* Close connection politely. */
conn->status = CONNECTION_OK;
@@ -3795,8 +3795,8 @@ keep_going: /* We will come back to here until there is
default:
libpq_append_conn_error(conn,
- "invalid connection state %d, probably indicative of memory corruption",
- conn->status);
+ "invalid connection state %d, probably indicative of memory corruption",
+ conn->status);
goto error_return;
}
@@ -7175,7 +7175,7 @@ pgpassfileWarning(PGconn *conn)
if (sqlstate && strcmp(sqlstate, ERRCODE_INVALID_PASSWORD) == 0)
libpq_append_conn_error(conn, "password retrieved from file \"%s\"",
- conn->pgpassfile);
+ conn->pgpassfile);
}
}
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 6a4431ddfe9..e6da377fb9d 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -967,7 +967,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -993,7 +993,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1037,7 +1037,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1089,10 +1089,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1122,7 +1122,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1140,7 +1140,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1239,7 +1239,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1250,7 +1250,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1265,7 +1265,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1278,7 +1278,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1315,10 +1315,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1326,7 +1326,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1383,7 +1383,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1399,7 +1399,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1452,7 +1452,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1494,12 +1494,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d7ec5ed4293..85289980a11 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -888,8 +888,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
v14-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patch (application/octet-stream)
From 879072565bebb377470fe847ed27e2b5b1b0a2f3 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v14 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactoring, the mechanism for
iteration over addresses in PQconnectPoll is now identical to its
iteration over hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 66ba359390f..ee28e223bd7 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 97e47f05852..41deeee9a63 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -379,6 +379,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2077,7 +2078,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2121,11 +2122,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2272,9 +2273,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2287,6 +2288,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2327,7 +2329,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2350,8 +2352,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2362,8 +2364,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2372,7 +2374,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2387,8 +2389,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2397,8 +2399,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2455,30 +2463,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2494,7 +2501,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2505,7 +2512,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2531,7 +2538,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2558,7 +2565,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2650,8 +2657,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4068,6 +4075,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do
+ * this is so that we can edit the resulting list as we please, because the
+ * memory is then owned by us. Changing the original addrinfo directly is
+ * risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4075,11 +4121,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 85289980a11..4d40e8a2fbb 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -461,8 +461,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86a9303bf56..fa8881c9d93 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
It looks like this needs a big rebase in fe-auth.c, fe-auth-scram.c and
fe-connect.c. Every hunk is failing, which perhaps means the code
you're patching has been moved or refactored?
"Gregory Stark (as CFM)" <stark.cfm@gmail.com> writes:
It looks like this needs a big rebase in fe-auth.c, fe-auth-scram.c and
fe-connect.c. Every hunk is failing, which perhaps means the code
you're patching has been moved or refactored?
The cfbot is giving up after
v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,
but that's been superseded (at least in part) by b6dfee28f.
regards, tom lane
On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Gregory Stark (as CFM)" <stark.cfm@gmail.com> writes:
It looks like this needs a big rebase in fe-auth.c, fe-auth-scram.c and
fe-connect.c. Every hunk is failing, which perhaps means the code
you're patching has been moved or refactored?
The cfbot is giving up after
v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,
but that's been superseded (at least in part) by b6dfee28f.
Ah, same with Jelte Fennema's patch for load balancing in libpq.
--
greg
The rebase was indeed trivial (git handled everything automatically),
because my first patch was doing a superset of the changes that were
committed in b6dfee28f. Attached are the new patches.
On Tue, 14 Mar 2023 at 19:04, Greg Stark <stark@mit.edu> wrote:
On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Gregory Stark (as CFM)" <stark.cfm@gmail.com> writes:
It looks like this needs a big rebase in fe-auth.c, fe-auth-scram.c and
fe-connect.c. Every hunk is failing, which perhaps means the code
you're patching has been moved or refactored?
The cfbot is giving up after
v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,
but that's been superseded (at least in part) by b6dfee28f.
Ah, same with Jelte Fennema's patch for load balancing in libpq.
--
greg
Attachments:
v15-0003-Return-2-from-pqReadData-on-EOF.patch
From 9c6a7f0d524036ea6e9dcb5e2ec360ff0b12b670 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v15 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
rather than comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index e6da377fb9d..8b5909e08ef 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure is a clean
+ * connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
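To make the return convention of the 0003 patch concrete: existing callers only test for a negative result, while cancel-connection code can single out the new -2 value. The sketch below is a standalone illustration, not libpq code; read_some() and cancel_finished() are hypothetical stand-ins for pqReadData() and its cancel-side caller.

```c
#include <assert.h>
#include <string.h>

/* Return convention the patch introduces for pqReadData():
 *  1: read at least one byte, 0: no data yet,
 * -1: hard error, -2: clean EOF (connection closed by the peer). */
enum { READ_OK = 1, READ_NODATA = 0, READ_ERROR = -1, READ_EOF = -2 };

static int
read_some(const char *simulated_wire)
{
	if (simulated_wire == NULL)
		return READ_EOF;		/* peer closed the socket cleanly */
	if (strcmp(simulated_wire, "garbage") == 0)
		return READ_ERROR;		/* socket or protocol failure */
	return strlen(simulated_wire) > 0 ? READ_OK : READ_NODATA;
}

/* Existing callers keep working unchanged, since they only check for a
 * result "< 0". A cancel connection, by contrast, treats the clean EOF
 * as the server's signal that the cancellation was processed. */
static int
cancel_finished(int nread)
{
	return nread == READ_EOF;
}
```

This is why no existing call sites need updating: both -1 and -2 satisfy the generic `nread < 0` error check.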
v15-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patch
From e6e480d539dcccb342b3687449ff6700a5f053a7 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v15 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
As a nice side effect of this refactor, the mechanism for iterating
over addresses in PQconnectPoll is now identical to its iteration over
hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 6 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 33 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index 5268d442abe..507ee825824 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index dd4b98e0998..b085892feac 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -383,6 +383,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2246,7 +2247,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2290,11 +2291,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2441,9 +2442,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2456,6 +2457,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2496,7 +2498,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2519,8 +2521,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2531,8 +2533,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2541,7 +2543,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2556,8 +2558,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2566,8 +2568,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2624,30 +2632,29 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2663,7 +2670,7 @@ keep_going: /* We will come back to here until there is
conn->connip = strdup(host_addr);
/* Try to create the socket */
- conn->sock = socket(addr_cur->ai_family, SOCK_STREAM, 0);
+ conn->sock = socket(addr_cur->family, SOCK_STREAM, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2674,7 +2681,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2700,7 +2707,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2727,7 +2734,7 @@ keep_going: /* We will come back to here until there is
}
#endif /* F_SETFD */
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2819,8 +2826,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4243,6 +4250,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. We do this so
+ * that we can edit the resulting list as we please, because now the memory
+ * is owned by us. Changing the original addrinfo directly is risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4250,11 +4296,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8890525cdf4..cf10ea15aa1 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -470,8 +470,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
int addrlist_family; /* needed to know how to free addrlist */
bool send_appname; /* okay to send application_name? */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86a9303bf56..fa8881c9d93 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
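The two-pass copy that store_conn_addrinfo() in the 0002 patch performs (count the linked list, allocate a flat array, then copy each node) can be shown in isolation. This is a simplified sketch: `struct node` and `store_list` are hypothetical stand-ins for `struct addrinfo` and the real function, copying only the family field instead of the full sockaddr.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for the addrinfo linked list returned by getaddrinfo. */
struct node
{
	int			family;
	struct node *next;
};

/* Copy the list into a freshly allocated flat array owned by the caller,
 * so entries can be indexed and edited freely (and the original list can
 * be freed with freeaddrinfo right away). Returns the element count, or
 * -1 on out-of-memory. */
static int
store_list(struct node *list, int **out_families)
{
	int			n = 0;
	struct node *it;

	/* first pass: count entries */
	for (it = list; it != NULL; it = it->next)
		n++;

	*out_families = calloc(n, sizeof(int));
	if (*out_families == NULL)
		return -1;

	/* second pass: copy each node into the array */
	it = list;
	for (int i = 0; i < n; i++)
	{
		(*out_families)[i] = it->family;
		it = it->next;
	}
	return n;
}
```

Owning a flat array is what lets later patches reorder or trim the address list without worrying about how the platform's freeaddrinfo walks its internal structures.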
v15-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 70a7560ae3d88818598ff9932c39a8a2d3c64740 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v15 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 ++++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 6a4431ddfe9..e6da377fb9d 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -967,7 +967,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -993,7 +993,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1037,7 +1037,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1089,10 +1089,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1122,7 +1122,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1140,7 +1140,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1239,7 +1239,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1250,7 +1250,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1265,7 +1265,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1278,7 +1278,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1315,10 +1315,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1326,7 +1326,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1383,7 +1383,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1399,7 +1399,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1452,7 +1452,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1494,12 +1494,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1dc264fe544..8890525cdf4 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -897,8 +897,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
--
2.34.1
Attachment: v15-0004-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 303ccd535b2b414093b9bb3c9ce2dd8ecdea4553 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v15 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
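
To illustrate how the new functions are intended to fit together, here is a
hedged sketch of the event-loop usage this patch enables. It only compiles
against a libpq with this patch applied (PQcancelConn, PQcancelPoll,
PQcancelSocket, PQcancelStatus, PQcancelErrorMessage and PQcancelFinish do
not exist in released libpq); the helper name cancel_in_event_loop is made
up, and select() stands in for a real event loop's own wait primitive:

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Sketch: drive a cancel request to completion without blocking inside
 * libpq.  Follows the same polling contract as PQconnectPoll: wait for
 * the socket to become readable/writable as requested, then poll again.
 */
static bool
cancel_in_event_loop(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);

	while (pollres == PGRES_POLLING_READING ||
		   pollres == PGRES_POLLING_WRITING)
	{
		fd_set		input_mask;
		fd_set		output_mask;
		int			sock = PQcancelSocket(cancelConn);

		if (sock < 0)
			break;				/* connection already closed */

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* a real event loop would register the fd here instead */
		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
			break;

		pollres = PQcancelPoll(cancelConn);
	}

	bool		ok = (PQcancelStatus(cancelConn) == CONNECTION_OK);

	if (!ok)
		fprintf(stderr, "cancel failed: %s",
				PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
	return ok;
}
```

The blocking PQcancelSend is then just this loop folded into one call, which is why both entry points share the PGcancelConn machinery.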
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 982 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3706d349abc..ee4ab9831f0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5014,7 +5014,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5733,13 +5733,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection options
+ that are only used during or after authentication of
+ the client are ignored though, because cancellation requests do not
+ require authentication and the connection is closed right after the
+ cancellation request is submitted.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ if any error occurred while sending the cancellation request. If
+ <xref linkend="libpq-PQcancelStatus"/> returns <symbol>CONNECTION_OK</symbol>
+ the request was sent successfully, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. If an error occurred, the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used to
+ send cancellation requests in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection, unlike <xref linkend="libpq-PQcancelSend"/>.
+ The return value should still be passed to <xref linkend="libpq-PQcancelStatus"/>
+ though, to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then this connection is closed. The function then prepares the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5781,14 +5986,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5796,21 +6015,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ the server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5822,13 +6042,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8987,7 +9216,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index b085892feac..3bf1fdc4dd1 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -380,8 +380,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -605,8 +607,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would no longer have access
+ * to the secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +748,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -912,6 +1030,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2200,10 +2357,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn. So leave
+ * these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2345,7 +2510,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2470,13 +2638,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3067,6 +3239,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3811,8 +4006,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4179,19 +4380,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4303,6 +4493,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4310,6 +4525,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4363,7 +4587,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo, we don't free it
+ * here; otherwise we would have to rebuild it during PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4778,6 +5008,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than a clean EOF as a failure
+ * and report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do
+ * unexpectedly receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index cf10ea15aa1..57dead1c79c 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -398,6 +398,10 @@ struct pg_conn
char *target_session_attrs; /* desired session properties */
char *require_auth; /* name of the expected auth method */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -603,6 +607,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..e8e904892c7 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * Test that PQcancelReset works on the cancel connection and that it can
+ * be reused afterwards.
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Attachment: v15-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 1a911d4f06a71ac99a0c63b673fbde667f380305 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v15 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 12b54f15cd6..bc3e5181683 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1234,35 +1234,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Rebased after conflicts with bfc9497ece01c7c45437bc36387cb1ebe346f4d2
Also included the fix for feedback from Daniel on patch 2, which he
had shared in the load balancing thread.
On Wed, 15 Mar 2023 at 09:49, Jelte Fennema <postgres@jeltef.nl> wrote:
The rebase was indeed trivial (git handled everything automatically),
because my first patch was doing a superset of the changes that were
committed in b6dfee28f. Attached are the new patches.On Tue, 14 Mar 2023 at 19:04, Greg Stark <stark@mit.edu> wrote:
On Tue, 14 Mar 2023 at 13:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:
"Gregory Stark (as CFM)" <stark.cfm@gmail.com> writes:
It looks like this needs a big rebase in fea-uth.c fe-auth-scram.c and
fe-connect.c. Every hunk is failing which perhaps means the code
you're patching has been moved or refactored?

The cfbot is giving up after
v14-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch fails,
but that's been superseded (at least in part) by b6dfee28f.

Ah, same with Jelte Fennema's patch for load balancing in libpq.
--
greg
Attachments:
Attachment: v16-0003-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From ab07383d5cb10de0500fb695c4f895890a87bc89 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v16 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index e6da377fb9d..8b5909e08ef 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure means that the connection
+ * was closed cleanly by the other side; in that case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Attachment: v16-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 9a595d95b17f7c712fdd065025c5d0e1dc6d2948 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v16 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 8eb9194506c..3f9a408a6af 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1233,35 +1233,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
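The polling loop added to pgfdw_cancel_query above follows a common non-blocking pattern: poll the cancel connection, and depending on the result either finish, fail, or wait for the socket to become readable or writable. Here is a minimal self-contained sketch of that pattern, with a stubbed state machine standing in for PQcancelPoll() and the wait step elided (all names here are illustrative, not the real libpq or backend API):

```c
#include <assert.h>

/* Poll results, mirroring the PGRES_POLLING_* states used above. */
typedef enum
{
	POLLING_READING,
	POLLING_WRITING,
	POLLING_OK,
	POLLING_FAILED
} PollStatus;

static int	stub_step = 0;

/* Stub: pretend we must first write the request, then read the reply. */
static PollStatus
stub_cancel_poll(void)
{
	switch (stub_step++)
	{
		case 0:
			return POLLING_WRITING;
		case 1:
			return POLLING_READING;
		default:
			return POLLING_OK;
	}
}

/* Drive the state machine to completion, with an iteration budget
 * standing in for the wall-clock timeout in the real code. */
static int
drive_cancel(int max_iterations)
{
	for (int i = 0; i < max_iterations; i++)
	{
		PollStatus	st = stub_cancel_poll();

		if (st == POLLING_OK)
			return 1;			/* cancel request fully sent */
		if (st == POLLING_FAILED)
			return 0;
		/* Real code: wait until the socket is readable/writable,
		 * e.g. via WaitLatchOrSocket(), then loop. */
	}
	return 0;					/* budget exhausted: treat as timeout */
}
```

In the actual patch the wait step is WaitLatchOrSocket() with WL_SOCKET_READABLE or WL_SOCKET_WRITEABLE chosen from the poll result, plus latch and interrupt handling.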
Attachment: v16-0002-Refactor-libpq-to-store-addrinfo-in-a-libpq-owne.patch (application/octet-stream)
From 3b69f56ed8eefad9728b0e38498b4c6f1b1d722c Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 10:22:41 +0100
Subject: [PATCH v16 2/5] Refactor libpq to store addrinfo in a libpq owned
array
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by us. This refactoring is useful for two upcoming patches,
which need to change the addrinfo list in some way. Doing that with the
original addrinfo list is risky since we don't control how memory is
freed. Also changing the contents of a C array is quite a bit easier
than changing a linked list.
A nice side effect of this refactor is that the mechanism for iterating
over addresses in PQconnectPoll is now identical to its mechanism for
iterating over hosts.
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 107 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 7 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 87 insertions(+), 34 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index bff7dd18a23..c85090259d9 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index b9f899c552e..4a0ea51a864 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -383,6 +383,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2242,7 +2243,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2286,11 +2287,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2437,9 +2438,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2452,6 +2453,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2492,7 +2494,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2515,8 +2517,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2527,8 +2529,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2537,7 +2539,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2552,8 +2554,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2562,8 +2564,14 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ if (!store_conn_addrinfo(conn, addrlist))
+ {
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ libpq_append_conn_error(conn, "out of memory");
+ goto error_return;
+ }
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2620,31 +2628,30 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
int sock_type;
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2679,7 +2686,7 @@ keep_going: /* We will come back to here until there is
*/
sock_type |= SOCK_NONBLOCK;
#endif
- conn->sock = socket(addr_cur->ai_family, sock_type, 0);
+ conn->sock = socket(addr_cur->family, sock_type, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2690,7 +2697,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2716,7 +2723,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2747,7 +2754,7 @@ keep_going: /* We will come back to here until there is
#endif /* F_SETFD */
#endif
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2839,8 +2846,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4263,6 +4270,45 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * Copies over the addrinfos from addrlist to the PGconn. The reason we do this
+ * so that we can edit the resulting list as we please, because now the memory
+ * is owned by us. Changing the original addrinfo directly is risky, since we
+ * don't control how the memory is freed and by changing it we might confuse
+ * the implementation of freeaddrinfo.
+ */
+static bool
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ return false;
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return true;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4270,11 +4316,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8890525cdf4..8f96c52e6c3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -470,9 +470,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
- int addrlist_family; /* needed to know how to free addrlist */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
bool send_appname; /* okay to send application_name? */
/* Miscellaneous stuff */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 097f42e1b34..5c5aa8bf4c9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
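The core of store_conn_addrinfo() above is a two-pass list-to-array copy: walk the linked list once to count, allocate a flat array in one shot, then copy element by element. A simplified standalone sketch of that shape (with an int payload standing in for struct addrinfo, and names that are illustrative only):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Node
{
	int			value;
	struct Node *next;
} Node;

/*
 * Copy a linked list into a freshly allocated flat array, returning the
 * array and setting *out_n to its length; returns NULL on allocation
 * failure (mirroring store_conn_addrinfo's out-of-memory path).
 */
static int *
list_to_array(Node *head, int *out_n)
{
	int			n = 0;
	int		   *arr;
	Node	   *p;

	/* First pass: count the elements. */
	for (p = head; p; p = p->next)
		n++;

	arr = calloc(n, sizeof(int));
	if (arr == NULL)
		return NULL;

	/* Second pass: copy each element into the array. */
	p = head;
	for (int i = 0; i < n; i++, p = p->next)
		arr[i] = p->value;

	*out_n = n;
	return arr;
}
```

Once the copy is done the original list can be freed immediately (as the patch does with pg_freeaddrinfo_all), and iteration reduces to an index compare, the same shape as libpq's existing whichhost/nconnhost loop.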
Attachment: v16-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch (application/octet-stream)
From 7b79e71db2b4f53724fadfc86bfaec72e6d610a9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v16 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 ++++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index ec62550e385..0c2dae6ed9e 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1444,7 +1444,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1512,7 +1512,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1558,7 +1558,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1652,7 +1652,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2099,10 +2099,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3047,6 +3046,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 6a4431ddfe9..e6da377fb9d 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -967,7 +967,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -993,7 +993,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1037,7 +1037,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1089,10 +1089,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1122,7 +1122,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1140,7 +1140,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1239,7 +1239,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1250,7 +1250,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1265,7 +1265,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1278,7 +1278,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1315,10 +1315,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1326,7 +1326,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1383,7 +1383,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1399,7 +1399,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1452,7 +1452,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1494,12 +1494,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1dc264fe544..8890525cdf4 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -897,8 +897,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
base-commit: d69c404c4cc5985d8ae5b5ed38bed3400b317f82
--
2.34.1
Attachment: v16-0004-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From aafad8ca3b50542bfd3333d39a8351ba7b0c0fae Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v16 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq's cancellation APIs. The
test can easily be run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
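For illustration, here is a rough sketch of how an event-loop-based client
might drive the new non-blocking functions. It is based solely on the
signatures added in this patch (PQcancelConn, PQcancelPoll, PQcancelSocket,
PQcancelStatus, PQcancelErrorMessage, PQcancelFinish), which may still change
during review; select() stands in for the caller's event loop:

```c
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

/* Returns 0 if the cancel request was dispatched, -1 on error. */
static int
dispatch_cancel(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	int			ret = -1;

	if (cancelConn == NULL)
		return -1;

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock;
		fd_set		fds;

		if (pollres == PGRES_POLLING_OK)
		{
			ret = 0;
			break;
		}
		if (pollres == PGRES_POLLING_FAILED)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			break;
		}

		/* A real event loop would register the socket and return to the
		 * loop here instead of blocking in select(). */
		sock = PQcancelSocket(cancelConn);
		FD_ZERO(&fds);
		FD_SET(sock, &fds);
		select(sock + 1,
			   pollres == PGRES_POLLING_READING ? &fds : NULL,
			   pollres == PGRES_POLLING_WRITING ? &fds : NULL,
			   NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return ret;
}
```

Regardless of the outcome, the caller must still drain results from the
original connection with PQgetResult, as with the existing PQcancel API.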
---
doc/src/sgml/libpq.sgml | 275 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 982 insertions(+), 52 deletions(-)
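The blocking variant from point 1 of the commit message reduces to a much
shorter call sequence. Again, this is only a sketch against the signatures
proposed in the diff below, not final API:

```c
#include <stdio.h>
#include <libpq-fe.h>

static void
cancel_blocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelSend(conn);

	if (cancelConn == NULL ||
		PQcancelStatus(cancelConn) == CONNECTION_BAD)
		fprintf(stderr, "cancel failed: %s\n",
				cancelConn ? PQcancelErrorMessage(cancelConn) : "out of memory");

	/* Must be called even when the cancel request failed. */
	if (cancelConn)
		PQcancelFinish(cancelConn);
}
```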
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 9ee5532c076..5d70dbe632e 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5004,7 +5004,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5723,13 +5723,218 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection options
+ that are only used during authentication or after authentication of
+ the client are ignored though, because cancellation requests do not
+ require authentication and the connection is closed right after the
+ cancellation request is submitted.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ if any error occurred while sending the cancellation request. If
+ <xref linkend="libpq-PQcancelStatus"/> returns <symbol>CONNECTION_OK</symbol>
+ the request was sent successfully, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. If an error occurred, the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used to
+ send cancellation requests in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection, unlike <xref linkend="libpq-PQcancelSend"/>.
+ The return value should still be passed to <xref linkend="libpq-PQcancelStatus"/>
+ though, to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed. A <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname>, on the other hand, means that queries
+ can be sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not finished sending the cancel
+ request yet). Also frees the memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5771,14 +5976,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5786,21 +6005,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server at the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5812,13 +6032,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -8977,7 +9206,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 4a0ea51a864..48a937b6707 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -380,8 +380,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static bool store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -605,8 +607,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -737,6 +748,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -912,6 +1030,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2196,10 +2353,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn. So leave
+ * these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2341,7 +2506,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2466,13 +2634,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3087,6 +3259,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3831,8 +4026,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4199,19 +4400,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4323,6 +4513,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4330,6 +4545,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4383,7 +4607,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4798,6 +5028,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once
+ * we reach CONNECTION_AWAITING_RESPONSE we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than an EOF as a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only a connection closure. So if we do
+ * unexpectedly receive some data, we treat that as an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8f96c52e6c3..9c0a3e2e5e6 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -398,6 +398,10 @@ struct pg_conn
char *target_session_attrs; /* desired session properties */
char *require_auth; /* name of the expected auth method */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -602,6 +606,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..e8e904892c7 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macrco to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("monitoring query failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Hi Jelte,
I had a look into your patchset (v16), did a quick review and played a
bit with the feature.
Patch 2 is missing the documentation about PQcancelSocket() and contains
a few typos; please find attached a (fixup) patch to correct these.
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
[...]
Maybe I'm missing something, but this function above seems a bit
strange. Namely, I wonder why it returns a PGcancelConn and what's the
point of requiring the user to call PQcancelStatus() to see if something
went wrong. Maybe it could be defined as:
int PQcancelSend(PGcancelConn *cancelConn);
where the return value would be status? And the user would only need to
call PQcancelErrorMessage() in case of error. This would leave only one
single way to create a PGcancelConn value (i.e. PQcancelConn()), which
seems less confusing to me.
Jelte Fennema wrote:
Especially since I ran into another use case that I would want to use
this patch for recently: Adding an async cancel function to Python
it's psycopg3 library. This library exposes both a Connection class
and an AsyncConnection class (using python its asyncio feature). But
one downside of the AsyncConnection type is that it doesn't have a
cancel method.
As part of my testing, I've implemented non-blocking cancellation in
Psycopg, based on v16 on this patchset. Overall this worked fine and
seems useful; if you want to try it:
https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel
(The only thing I found slightly inconvenient is the need to convey the
connection encoding (from PGconn) when handling error message from the
PGcancelConn.)
Cheers,
Denis
Attachments:
0001-fixup-Add-non-blocking-version-of-PQcancel.patchtext/x-diff; charset=us-asciiDownload
From a5f9cc680ffa520b05fe34b7cac5df2e60a6d4ad Mon Sep 17 00:00:00 2001
From: Denis Laxalde <denis.laxalde@dalibo.com>
Date: Tue, 28 Mar 2023 16:06:42 +0200
Subject: [PATCH] fixup! Add non-blocking version of PQcancel
---
doc/src/sgml/libpq.sgml | 16 +++++++++++++++-
src/interfaces/libpq/fe-connect.c | 2 +-
src/test/modules/libpq_pipeline/libpq_pipeline.c | 2 +-
3 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 29f08a4317..aa404c4d15 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -5795,7 +5795,7 @@ PGcancelConn *PQcancelSend(PGconn *conn);
options as the original <structname>PGconn</structname>. So when the
original connection is encrypted (using TLS or GSS), the connection for
the cancel request is encrypted in the same way. Any connection options
- that only are only used during authentication or after authentication of
+ that are only used during authentication or after authentication of
the client are ignored though, because cancellation requests do not
require authentication and the connection is closed right after the
cancellation request is submitted.
@@ -5912,6 +5912,20 @@ ConnStatusType PQcancelStatus(const PGcancelConn *conn);
</listitem>
</varlistentry>
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQcancelPoll">
<term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 74e337fddf..16af7303d4 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -808,7 +808,7 @@ PQcancelConn(PGconn *conn)
/*
* Cancel requests should not iterate over all possible hosts. The request
* needs to be sent to the exact host and address that the original
- * connection used. So we we manually create the host and address arrays
+ * connection used. So we manually create the host and address arrays
* with a single element after freeing the host array that we generated
* from the connection options.
*/
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index e8e904892c..6764ab513b 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -89,7 +89,7 @@ pg_fatal_impl(int line, const char *fmt,...)
/*
* Check that the query on the given connection got cancelled.
*
- * This is a function wrapped in a macrco to make the reported line number
+ * This is a function wrapped in a macro to make the reported line number
* in an error match the line number of the invocation.
*/
#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
--
2.30.2
On Tue, 28 Mar 2023 at 16:54, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
I had a look into your patchset (v16), did a quick review and played a
bit with the feature. Patch 2 is missing the documentation about PQcancelSocket() and contains
a few typos; please find attached a (fixup) patch to correct these.
Thanks applied that patch and attached a new patchset
Namely, I wonder why it returns a PGcancelConn and what's the
point of requiring the user to call PQcancelStatus() to see if something
went wrong. Maybe it could be defined as:
int PQcancelSend(PGcancelConn *cancelConn);
where the return value would be status? And the user would only need to
call PQcancelErrorMessage() in case of error. This would leave only one
single way to create a PGcancelConn value (i.e. PQcancelConn()), which
seems less confusing to me.
To clarify what you mean, the API would then be like this:
PGcancelConn *cancelConn = PQcancelConn(conn);
if (PQcancelSend(cancelConn) == CONNECTION_BAD) {
    printf("ERROR %s\n", PQcancelErrorMessage(cancelConn));
    exit(1);
}

Instead of:

PGcancelConn *cancelConn = PQcancelSend(conn);
if (PQcancelStatus(cancelConn) == CONNECTION_BAD) {
    printf("ERROR %s\n", PQcancelErrorMessage(cancelConn));
    exit(1);
}
Those are so similar that I have no preference either way. If more
people prefer one over the other I'm happy to change it, but for now
I'll keep it as is.
As part of my testing, I've implemented non-blocking cancellation in
Psycopg, based on v16 on this patchset. Overall this worked fine and
seems useful; if you want to try it:
https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel
That's great to hear! I'll try to take a closer look at that change tomorrow.
(The only thing I found slightly inconvenient is the need to convey the
connection encoding (from PGconn) when handling error message from the
PGcancelConn.)
Could you expand a bit more on this? And if you have any idea on how
to improve the API with regards to this?
Attachments:
v17-0002-Copy-and-store-addrinfo-in-libpq-owned-private-m.patchapplication/octet-stream; name=v17-0002-Copy-and-store-addrinfo-in-libpq-owned-private-m.patchDownload
From fa487867fa7f14b165dc79bc4d0fc9b71c12b5d3 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Mon, 27 Mar 2023 11:17:56 +0200
Subject: [PATCH v17 2/5] Copy and store addrinfo in libpq-owned private memory
This refactors libpq to copy addrinfos returned by getaddrinfo to
memory owned by libpq such that future improvements can alter for
example the order of entries.
As a nice side effect of this refactor the mechanism for iteration
over addresses in PQconnectPoll is now identical to its iteration
over hosts.
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 112 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 7 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 92 insertions(+), 34 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index bff7dd18a23..c85090259d9 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index b71378d94c5..4e798e1672c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -389,6 +389,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2295,7 +2296,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2339,11 +2340,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2490,9 +2491,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2505,6 +2506,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2545,7 +2547,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2568,8 +2570,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2580,8 +2582,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2590,7 +2592,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2605,8 +2607,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2615,8 +2617,15 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ /*
+ * Store a copy of the addrlist in private memory so we can perform
+ * randomization for load balancing.
+ */
+ ret = store_conn_addrinfo(conn, addrlist);
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ if (ret)
+ goto error_return; /* message already logged */
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2673,31 +2682,30 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
int sock_type;
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2732,7 +2740,7 @@ keep_going: /* We will come back to here until there is
*/
sock_type |= SOCK_NONBLOCK;
#endif
- conn->sock = socket(addr_cur->ai_family, sock_type, 0);
+ conn->sock = socket(addr_cur->family, sock_type, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2743,7 +2751,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2769,7 +2777,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2800,7 +2808,7 @@ keep_going: /* We will come back to here until there is
#endif /* F_SETFD */
#endif
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2892,8 +2900,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4318,6 +4326,49 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * store_conn_addrinfo
+ * - copy addrinfo to PGconn object
+ *
+ * Copies the addrinfos from addrlist to the PGconn object such that the
+ * addrinfos can be manipulated by libpq. Returns a positive integer on
+ * failure, otherwise zero.
+ */
+static int
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ {
+ libpq_append_conn_error(conn, "out of memory");
+ return 1;
+ }
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return 0;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4325,11 +4376,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1ff57044508..760ee3f6912 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -471,9 +471,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
- int addrlist_family; /* needed to know how to free addrlist */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
bool send_appname; /* okay to send application_name? */
/* Miscellaneous stuff */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0b7bc457671..dfa1e309ee3 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
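The store_conn_addrinfo change above copies getaddrinfo's linked list into a flat array that libpq owns and can index freely. A minimal sketch of that flattening, with simplified stand-in types (the real code also copies ai_addr/ai_addrlen into a SockAddr):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins; FlatAddr plays the role of libpq's AddrInfo */
typedef struct FlatAddr
{
	int			family;
} FlatAddr;

typedef struct FakeAddrinfo
{
	int			ai_family;
	struct FakeAddrinfo *ai_next;
} FakeAddrinfo;

/*
 * Count the linked list, then copy it into a flat array that the caller
 * owns and can index (and, later, shuffle for load balancing) freely.
 * Returns 0 on success, 1 on out-of-memory, like store_conn_addrinfo.
 */
static int
flatten_addrlist(FakeAddrinfo *list, FlatAddr **out, int *naddr)
{
	FakeAddrinfo *ai;
	int			n = 0;
	int			i = 0;

	for (ai = list; ai != NULL; ai = ai->ai_next)
		n++;

	*out = calloc(n, sizeof(FlatAddr));
	if (*out == NULL)
		return 1;

	for (ai = list; ai != NULL; ai = ai->ai_next)
		(*out)[i++].family = ai->ai_family;

	*naddr = n;
	return 0;
}
```

Once flattened, the original list can be freed immediately with pg_freeaddrinfo_all, which is what the hunk above does.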
Attachment: v17-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 4157ff7eba87420258ac750eb5654ad3447b2576 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v17 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 8eb9194506c..3f9a408a6af 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1233,35 +1233,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
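The postgres_fdw hunk above drives PQcancelPoll in a loop, waiting on the socket between calls. The control flow can be sketched schematically as below; this is an illustration only, with a stub standing in for PQcancelPoll and the enum mirroring PostgresPollingStatusType (in a real event loop the READING/WRITING cases would register the socket from PQcancelSocket with select()/epoll and yield instead of iterating):

```c
#include <assert.h>

/* Mirrors libpq's PostgresPollingStatusType for this sketch */
typedef enum
{
	POLLING_FAILED,
	POLLING_READING,
	POLLING_WRITING,
	POLLING_OK
} PollStatus;

/*
 * Drive a PQcancelPoll-style state machine to completion. poll_fn
 * stands in for PQcancelPoll; max_iters stands in for the 30 second
 * timeout in pgfdw_cancel_query.
 */
static PollStatus
drive_cancel(PollStatus (*poll_fn) (void *), void *ctx, int max_iters)
{
	while (max_iters-- > 0)
	{
		PollStatus	st = poll_fn(ctx);

		if (st == POLLING_OK || st == POLLING_FAILED)
			return st;
		/* st is READING or WRITING: wait for socket readiness here */
	}
	return POLLING_FAILED;		/* iteration budget exhausted: treat as timeout */
}

/* Stub that needs two wait cycles before the request is dispatched */
static PollStatus
stub_poll(void *ctx)
{
	int		   *calls = ctx;

	return (++(*calls) < 3) ? POLLING_READING : POLLING_OK;
}
```

The failure handling in pgfdw_cancel_query follows the same shape: anything other than OK before the deadline turns into a WARNING and a false return.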
Attachment: v17-0003-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From 55513ba5245ce92afba84fc1a0951e34f45928c8 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v17 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for Postgres its cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing call sites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less
than 0 instead of doing a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 61f8a5c9c6c..351161bd0f9 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means
+ * that there was a clean connection closure; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
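The return-value convention the patch above establishes for pqReadData (1 = got data, 0 = would block, -1 = hard error, -2 = clean EOF) can be illustrated with a small classifier over raw recv()-style results. This is a hypothetical helper for illustration only; the real pqReadData also handles the SSL/GSS layers and buffer management:

```c
#include <assert.h>
#include <errno.h>
#include <sys/types.h>

/*
 * Map a raw recv()-style result onto the pqReadData convention:
 *   1: loaded at least one byte
 *   0: no data presently available, retry later
 *  -1: error detected
 *  -2: connection closed cleanly by the other side
 */
static int
classify_read(ssize_t nread, int err)
{
	if (nread > 0)
		return 1;				/* got data */
	if (nread == 0)
		return -2;				/* orderly shutdown by the peer */
	if (err == EAGAIN || err == EWOULDBLOCK || err == EINTR)
		return 0;				/* transient, not an error */
	return -1;					/* genuine error */
}
```

The point of the split is exactly the cancel-protocol case from the commit message: callers that expect an EOF (the server's acknowledgement of a cancel request) can distinguish -2 from -1, while existing callers that only test `< 0` are unaffected.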
Attachment: v17-0004-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 3d9f70f5847f184696dbd4380b79c4f853285621 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v17 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 289 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 452 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 25 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 +++++++++-
6 files changed, 996 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8579dcac952..aa404c4d155 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5060,7 +5060,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5779,13 +5779,232 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command.
+<synopsis>
+PGcancelConn *PQcancelSend(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ This request is made over a connection that uses the same connection
+ options as the original <structname>PGconn</structname>. So when the
+ original connection is encrypted (using TLS or GSS), the connection for
+ the cancel request is encrypted in the same way. Any connection options
+ that are only used during authentication or after authentication of
+ the client are ignored though, because cancellation requests do not
+ require authentication and the connection is closed right after the
+ cancellation request is submitted.
+ </para>
+
+ <para>
+ This function returns a <structname>PGcancelConn</structname>
+ object. <xref linkend="libpq-PQcancelStatus"/> can be used to check
+ if any error occurred while sending the cancellation request. If
+ <xref linkend="libpq-PQcancelStatus"/> returns <symbol>CONNECTION_OK</symbol>
+ the request was sent successfully, but if it returns <symbol>CONNECTION_BAD</symbol>
+ an error occurred. If an error occurred, the error message can be retrieved using
+ <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelSend</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQcancelSend"/> that can be used to
+ send cancellation requests in a non-blocking manner.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection, unlike <xref linkend="libpq-PQcancelSend"/>.
+ The return value should still be passed to <xref linkend="libpq-PQcancelStatus"/>
+ though, to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe and non-blocking way.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse it multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5827,14 +6046,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5842,21 +6075,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5868,13 +6102,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues of this
+ function. It has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9043,7 +9286,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
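As a quick illustration for reviewers of the documentation above (this sketch is not part of the patch; it only uses the functions the patch declares in libpq-fe.h, and the surrounding program structure is assumed):

```c
#include <stdio.h>
#include <libpq-fe.h>

/*
 * Cancel the query currently running on "conn" using the proposed
 * blocking PQcancelSend API. Returns true if the cancel request was
 * delivered; note that delivery does not guarantee the query is
 * actually cancelled.
 */
static bool
cancel_current_query(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelSend(conn);
	bool		ok = true;

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "cancel failed: %s",
				PQcancelErrorMessage(cancelConn));
		ok = false;
	}

	/* The PGcancelConn must always be freed, even on failure. */
	PQcancelFinish(cancelConn);
	return ok;
}
```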
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 4e798e1672c..8c89a892723 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -386,8 +386,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -612,8 +614,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -744,6 +755,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays
+ * with a single element after freeing the host array that we generated
+ * from the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -919,6 +1037,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2249,10 +2406,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2394,7 +2559,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2519,13 +2687,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3141,6 +3313,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3885,8 +4080,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4254,19 +4455,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4383,6 +4573,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4390,6 +4605,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4443,7 +4667,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4858,6 +5088,180 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ */
+PGcancelConn *
+PQcancelSend(PGconn *conn)
+{
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return cancelConn;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return cancelConn;
+ }
+
+ (void) connectDBComplete(&cancelConn->conn);
+
+ return cancelConn;
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ *
+ * On all other OSes we consider any error other than EOF to be a
+ * failure, and we report it as such.
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we somehow do
+ * receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..95899b9f55b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,28 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* issue a cancel request */
+extern PGcancelConn * PQcancelSend(PGconn *conn);
+/* non-blocking version of PQcancelSend */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 760ee3f6912..eaca46b6aa0 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -399,6 +399,10 @@ struct pg_conn
char *target_session_attrs; /* desired session properties */
char *require_auth; /* name of the expected auth method */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -606,6 +610,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
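For reviewers, this is roughly how the non-blocking half of the new API fits into an event loop (an illustrative sketch, not part of the patch; a real event loop would register the socket with its poller instead of calling select() inline, which is assumed here only to keep the example self-contained):

```c
#include <stdbool.h>
#include <sys/select.h>
#include <libpq-fe.h>

/*
 * Drive a cancel request for the query on "conn" to completion using
 * the proposed PQcancelConn/PQcancelPoll API. Returns true if the
 * cancel request was delivered.
 */
static bool
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	bool		ok = false;

	while (PQcancelStatus(cancelConn) != CONNECTION_BAD)
	{
		PostgresPollingStatusType st = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		rmask;
		fd_set		wmask;

		if (st == PGRES_POLLING_OK)
		{
			ok = true;
			break;
		}
		if (st == PGRES_POLLING_FAILED || sock < 0)
			break;

		/* Wait for the socket to become ready in the requested direction. */
		FD_ZERO(&rmask);
		FD_ZERO(&wmask);
		if (st == PGRES_POLLING_READING)
			FD_SET(sock, &rmask);
		else
			FD_SET(sock, &wmask);
		(void) select(sock + 1, &rmask, &wmask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return ok;
}
```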
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..6764ab513b2 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelSend(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v17-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch (application/octet-stream)
From 062c395cc25e272b86b88b0095c72312cd1f0fc4 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v17 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 ++++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index a16bbf32ef5..14d706efd57 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1448,7 +1448,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1516,7 +1516,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1562,7 +1562,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1656,7 +1656,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2103,10 +2103,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3051,6 +3050,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 4d1e4009ef1..61f8a5c9c6c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -1000,7 +1000,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1026,7 +1026,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1070,7 +1070,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1122,10 +1122,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1160,7 +1160,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1178,7 +1178,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1277,7 +1277,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1288,7 +1288,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1303,7 +1303,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1316,7 +1316,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1353,10 +1353,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1364,7 +1364,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1421,7 +1421,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1437,7 +1437,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1490,7 +1490,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1532,12 +1532,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 88b9838d766..1ff57044508 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -901,8 +901,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
base-commit: c1f1c1f87fd685981c45da528649c700b6ba0655
--
2.34.1
Jelte Fennema wrote:
Namely, I wonder why it returns a PGcancelConn and what's the
point of requiring the user to call PQcancelStatus() to see if something
went wrong. Maybe it could be defined as: int PQcancelSend(PGcancelConn *cancelConn);
where the return value would be the status? And the user would only need to
call PQcancelErrorMessage() in case of error. This would leave only one
single way to create a PGcancelConn value (i.e. PQcancelConn()), which
seems less confusing to me.

To clarify what you mean, the API would then be like this:
PGcancelConn *cancelConn = PQcancelConn(conn);
if (PQcancelSend(cancelConn) == CONNECTION_BAD) {
    printf("ERROR %s\n", PQcancelErrorMessage(cancelConn));
    exit(1);
}
I'm not sure it's worth returning the connection status, maybe just an
int value (the return value of connectDBComplete() for instance).
More importantly, not having PQcancelSend() creating the PGcancelConn
makes reuse of that value, passing through PQcancelReset(), more
intuitive. E.g., in the tests:
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 6764ab513b..91363451af 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -217,17 +217,18 @@ test_cancel(PGconn *conn, const char *conninfo)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_cancelled(conn);
+ cancelConn = PQcancelConn(conn);
+
/* test PQcancelSend */
send_cancellable_query(conn, monitorConn);
- cancelConn = PQcancelSend(conn);
- if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ if (PQcancelSend(cancelConn) == CONNECTION_BAD)
pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
confirm_query_cancelled(conn);
- PQcancelFinish(cancelConn);
+
+ PQcancelReset(cancelConn);
/* test PQcancelConn and then polling with PQcancelPoll */
send_cancellable_query(conn, monitorConn);
- cancelConn = PQcancelConn(conn);
if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
while (true)
Otherwise, it's not clear if the PGcancelConn created by PQcancelSend()
should be reused or not. But maybe that's a matter of documentation?
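To make the argument concrete, the reuse pattern being proposed would look roughly like this. This is only a sketch against the patch's proposed API (PQcancelConn, PQcancelSend, PQcancelReset and PQcancelFinish are names from this patchset, not released libpq), and run_some_cancellable_query() is a hypothetical helper:

```c
/* Sketch only: API names are from the proposed patch, not released libpq. */
PGcancelConn *cancelConn = PQcancelConn(conn);

for (int i = 0; i < 3; i++)
{
    run_some_cancellable_query(conn);   /* hypothetical helper */

    if (PQcancelSend(cancelConn) == 0)
        fprintf(stderr, "cancel failed: %s\n",
                PQcancelErrorMessage(cancelConn));

    /* Make the same object reusable for the next cancel request. */
    PQcancelReset(cancelConn);
}

PQcancelFinish(cancelConn);             /* free once, at the end */
```

With PQcancelSend() creating the object itself, it would be unclear whether the returned PGcancelConn may be fed back through PQcancelReset() like this.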
As part of my testing, I've implemented non-blocking cancellation in
Psycopg, based on v16 of this patchset. Overall this worked fine and
seems useful; if you want to try it: https://github.com/dlax/psycopg3/tree/pg16/non-blocking-pqcancel
That's great to hear! I'll try to take a closer look at that change tomorrow.
See also https://github.com/psycopg/psycopg/issues/534 if you want to
discuss about this.
(The only thing I found slightly inconvenient is the need to convey the
connection encoding (from PGconn) when handling error messages from the
PGcancelConn.)

Could you expand a bit more on this? And if you have any idea on how
to improve the API with regards to this?
The thing is that we need the connection encoding (client_encoding) when
eventually forwarding the result of PQcancelErrorMessage(), decoded, to
the user. More specifically, it seems to me that we'd need the encoding of
the *cancel connection*, but since PQparameterStatus() cannot be used
with a PGcancelConn, I use that of the PGconn. Roughly, in Python:
encoding = conn.parameter_status(b"client_encoding")
# i.e., in C: char *encoding = PQparameterStatus(conn, "client_encoding");
cancel_conn = conn.cancel_conn()
# i.e., in C: PGcancelConn *cancelConn = PQcancelConn(conn);
# [... then work with cancel_conn ...]
if cancel_conn.status == ConnStatus.BAD:
raise OperationalError(cancel_conn.error_message().decode(encoding))
This feels a bit non-atomic to me; isn't there a risk that
client_encoding changes between the PQparameterStatus(conn) and
PQcancelConn(conn) calls?
So maybe PQcancelParameterStatus(PGcancelConn *cancelConn, char *name)
is needed?
On Wed, 29 Mar 2023 at 10:43, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
More importantly, not having PQcancelSend() creating the PGcancelConn
makes reuse of that value, passing through PQcancelReset(), more
intuitive. E.g., in the tests:
You convinced me. Attached is an updated patch where PQcancelSend
takes the PGcancelConn and returns 1 or 0.
The thing is that we need the connection encoding (client_encoding) when
eventually forwarding the result of PQcancelErrorMessage(), decoded, to
the user.
Cancel connections don't have an encoding specified. They never
receive an error from the server. All errors come from the machine
that libpq is on. So I think you're making the decoding more
complicated than it needs to be.
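For reference, the fully non-blocking path that this patchset enables would look roughly like the following event-loop sketch. This is against the proposed API (PQcancelConn, PQcancelPoll, PQcancelSocket etc. are names from the patch, not released libpq), and the select() calls stand in for whatever readiness mechanism the caller's event loop uses:

```c
/* Sketch only: API names are from the proposed patch, not released libpq. */
PGcancelConn *cancelConn = PQcancelConn(conn);
PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;

while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
{
    fd_set  fds;
    int     sock = PQcancelSocket(cancelConn);

    if (sock < 0)
        break;

    FD_ZERO(&fds);
    FD_SET(sock, &fds);

    /* In a real event loop this would be a registered read/write watch,
     * not a blocking select(). */
    if (pollres == PGRES_POLLING_READING)
        select(sock + 1, &fds, NULL, NULL, NULL);
    else
        select(sock + 1, NULL, &fds, NULL, NULL);

    pollres = PQcancelPoll(cancelConn);
}

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
    fprintf(stderr, "cancel failed: %s\n", PQcancelErrorMessage(cancelConn));

PQcancelFinish(cancelConn);
```

Since all errors originate on the libpq side, the message returned by PQcancelErrorMessage() here does not depend on the server's client_encoding.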
Attachments:
v18-0004-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=v18-0004-Add-non-blocking-version-of-PQcancel.patchDownload
From e34e2643d4728d5362af0b86409380a4650e5547 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v18 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 ++++++++++-
6 files changed, 986 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8579dcac952..c339548d016 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5060,7 +5060,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5779,13 +5779,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed. While a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5827,14 +6037,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5842,21 +6066,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5868,13 +6093,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9043,7 +9277,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 4e798e1672c..21fd7c4b5a9 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -386,8 +386,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -612,8 +614,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would lose access to the
+ * secret token of the connection that they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -744,6 +755,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -919,6 +1037,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2249,10 +2406,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2394,7 +2559,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2519,13 +2687,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3141,6 +3313,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3885,8 +4080,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4254,19 +4455,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4383,6 +4573,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4390,6 +4605,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4443,7 +4667,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4858,6 +5088,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat anything other than an EOF as an error and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do somehow
+ * receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..84d64c9a658 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 760ee3f6912..eaca46b6aa0 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -399,6 +399,10 @@ struct pg_conn
char *target_session_attrs; /* desired session properties */
char *require_auth; /* name of the expected auth method */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -606,6 +610,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..6101e5d6143 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v18-0005-Start-using-new-libpq-cancel-APIs.patchapplication/octet-stream; name=v18-0005-Start-using-new-libpq-cancel-APIs.patchDownload
From c668d173c6982a960cc31d6e05a2ff8c7fe298e9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v18 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 10 +-
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 139 insertions(+), 51 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..e139f66e116 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelSend(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 8eb9194506c..3f9a408a6af 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1233,35 +1233,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..b32448c0103 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,11 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
-
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ PQcancelFinish(PQcancelSend(conn));
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..3781f7982b2 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelSend(conn);
- if (cancel != NULL)
+ if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v18-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 4066e1c516e0d3496b5f0e62bd3803acf9761027 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v18 1/5] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 ++++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index a16bbf32ef5..14d706efd57 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1448,7 +1448,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1516,7 +1516,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1562,7 +1562,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1656,7 +1656,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2103,10 +2103,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3051,6 +3050,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 4d1e4009ef1..61f8a5c9c6c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -1000,7 +1000,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1026,7 +1026,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1070,7 +1070,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1122,10 +1122,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1160,7 +1160,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1178,7 +1178,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1277,7 +1277,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1288,7 +1288,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1303,7 +1303,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1316,7 +1316,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1353,10 +1353,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1364,7 +1364,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1421,7 +1421,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1437,7 +1437,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1490,7 +1490,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1532,12 +1532,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 88b9838d766..1ff57044508 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -901,8 +901,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
base-commit: 8e5eef50c5b41fd39ad60365c9c1b46782f881ca
--
2.34.1
Attachment: v18-0002-Copy-and-store-addrinfo-in-libpq-owned-private-m.patch

From f5931455ef91c584b415ea7b136735da6560825f Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Mon, 27 Mar 2023 11:17:56 +0200
Subject: [PATCH v18 2/5] Copy and store addrinfo in libpq-owned private memory
This refactors libpq to copy the addrinfos returned by getaddrinfo() into
memory owned by libpq, so that future improvements can, for example,
alter the order of entries.
As a nice side effect of this refactor, the mechanism for iterating over
addresses in PQconnectPoll is now identical to its iteration over hosts.
Author: Jelte Fennema <postgres@jeltef.nl>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Andrey Borodin <amborodin86@gmail.com>
Discussion: https://postgr.es/m/PR3PR83MB04768E2FF04818EEB2179949F7A69@PR3PR83MB0476.EURPRD83.prod.outlook.com
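The copying approach described above can be sketched outside libpq as follows (plain POSIX; OwnedAddr and copy_addrinfo are illustrative stand-ins for the patch's AddrInfo and store_conn_addrinfo, not the actual libpq code):

```c
#include <assert.h>
#include <netdb.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

/* Owned copy of one resolved address, analogous to the patch's AddrInfo. */
typedef struct
{
	int			family;
	struct sockaddr_storage addr;
	socklen_t	addrlen;
} OwnedAddr;

/*
 * Copy a getaddrinfo() result list into a malloc'd array the caller
 * owns, so the original list can be freed immediately and the copy can
 * later be reordered freely.  Returns the number of entries, or -1 on
 * allocation failure.
 */
static int
copy_addrinfo(struct addrinfo *list, OwnedAddr **out)
{
	int			n = 0;
	struct addrinfo *ai;
	OwnedAddr  *arr;

	for (ai = list; ai != NULL; ai = ai->ai_next)
		n++;
	arr = malloc(sizeof(OwnedAddr) * (n > 0 ? n : 1));
	if (arr == NULL)
		return -1;
	n = 0;
	for (ai = list; ai != NULL; ai = ai->ai_next)
	{
		arr[n].family = ai->ai_family;
		memcpy(&arr[n].addr, ai->ai_addr, ai->ai_addrlen);
		arr[n].addrlen = ai->ai_addrlen;
		n++;
	}
	*out = arr;
	return n;
}

int
main(void)
{
	struct addrinfo hint,
			   *list = NULL;
	OwnedAddr  *owned;
	int			n;

	memset(&hint, 0, sizeof(hint));
	hint.ai_socktype = SOCK_STREAM;
	hint.ai_family = AF_UNSPEC;
	hint.ai_flags = AI_NUMERICHOST;	/* numeric address: no DNS lookup needed */

	assert(getaddrinfo("127.0.0.1", "5432", &hint, &list) == 0);
	n = copy_addrinfo(list, &owned);
	freeaddrinfo(list);			/* safe: nothing points into this list now */
	assert(n >= 1);
	assert(owned[0].family == AF_INET);
	free(owned);
	return 0;
}
```

Because the copies live in a flat array, iteration becomes an index (whichaddr/naddr) instead of a linked-list pointer chase, mirroring how PQconnectPoll already iterates over hosts.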
---
src/include/libpq/pqcomm.h | 6 ++
src/interfaces/libpq/fe-connect.c | 112 +++++++++++++++++++++---------
src/interfaces/libpq/libpq-int.h | 7 +-
src/tools/pgindent/typedefs.list | 1 +
4 files changed, 92 insertions(+), 34 deletions(-)
diff --git a/src/include/libpq/pqcomm.h b/src/include/libpq/pqcomm.h
index bff7dd18a23..c85090259d9 100644
--- a/src/include/libpq/pqcomm.h
+++ b/src/include/libpq/pqcomm.h
@@ -27,6 +27,12 @@ typedef struct
socklen_t salen;
} SockAddr;
+typedef struct
+{
+ int family;
+ SockAddr addr;
+} AddrInfo;
+
/* Configure the UNIX socket location for the well known port. */
#define UNIXSOCK_PATH(path, port, sockdir) \
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index b71378d94c5..4e798e1672c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -389,6 +389,7 @@ static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
+static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
static PQconninfoOption *conninfo_init(PQExpBuffer errorMessage);
static PQconninfoOption *parse_connection_string(const char *connstr,
@@ -2295,7 +2296,7 @@ connectDBComplete(PGconn *conn)
time_t finish_time = ((time_t) -1);
int timeout = 0;
int last_whichhost = -2; /* certainly different from whichhost */
- struct addrinfo *last_addr_cur = NULL;
+ int last_whichaddr = -2; /* certainly different from whichaddr */
if (conn == NULL || conn->status == CONNECTION_BAD)
return 0;
@@ -2339,11 +2340,11 @@ connectDBComplete(PGconn *conn)
if (flag != PGRES_POLLING_OK &&
timeout > 0 &&
(conn->whichhost != last_whichhost ||
- conn->addr_cur != last_addr_cur))
+ conn->whichaddr != last_whichaddr))
{
finish_time = time(NULL) + timeout;
last_whichhost = conn->whichhost;
- last_addr_cur = conn->addr_cur;
+ last_whichaddr = conn->whichaddr;
}
/*
@@ -2490,9 +2491,9 @@ keep_going: /* We will come back to here until there is
/* Time to advance to next address, or next host if no more addresses? */
if (conn->try_next_addr)
{
- if (conn->addr_cur && conn->addr_cur->ai_next)
+ if (conn->whichaddr < conn->naddr)
{
- conn->addr_cur = conn->addr_cur->ai_next;
+ conn->whichaddr++;
reset_connection_state_machine = true;
}
else
@@ -2505,6 +2506,7 @@ keep_going: /* We will come back to here until there is
{
pg_conn_host *ch;
struct addrinfo hint;
+ struct addrinfo *addrlist;
int thisport;
int ret;
char portstr[MAXPGPATH];
@@ -2545,7 +2547,7 @@ keep_going: /* We will come back to here until there is
/* Initialize hint structure */
MemSet(&hint, 0, sizeof(hint));
hint.ai_socktype = SOCK_STREAM;
- conn->addrlist_family = hint.ai_family = AF_UNSPEC;
+ hint.ai_family = AF_UNSPEC;
/* Figure out the port number we're going to use. */
if (ch->port == NULL || ch->port[0] == '\0')
@@ -2568,8 +2570,8 @@ keep_going: /* We will come back to here until there is
{
case CHT_HOST_NAME:
ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate host name \"%s\" to address: %s",
ch->host, gai_strerror(ret));
@@ -2580,8 +2582,8 @@ keep_going: /* We will come back to here until there is
case CHT_HOST_ADDRESS:
hint.ai_flags = AI_NUMERICHOST;
ret = pg_getaddrinfo_all(ch->hostaddr, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not parse network address \"%s\": %s",
ch->hostaddr, gai_strerror(ret));
@@ -2590,7 +2592,7 @@ keep_going: /* We will come back to here until there is
break;
case CHT_UNIX_SOCKET:
- conn->addrlist_family = hint.ai_family = AF_UNIX;
+ hint.ai_family = AF_UNIX;
UNIXSOCK_PATH(portstr, thisport, ch->host);
if (strlen(portstr) >= UNIXSOCK_PATH_BUFLEN)
{
@@ -2605,8 +2607,8 @@ keep_going: /* We will come back to here until there is
* name as a Unix-domain socket path.
*/
ret = pg_getaddrinfo_all(NULL, portstr, &hint,
- &conn->addrlist);
- if (ret || !conn->addrlist)
+ &addrlist);
+ if (ret || !addrlist)
{
libpq_append_conn_error(conn, "could not translate Unix-domain socket path \"%s\" to address: %s",
portstr, gai_strerror(ret));
@@ -2615,8 +2617,15 @@ keep_going: /* We will come back to here until there is
break;
}
- /* OK, scan this addrlist for a working server address */
- conn->addr_cur = conn->addrlist;
+ /*
+ * Store a copy of the addrlist in private memory so we can perform
+ * randomization for load balancing.
+ */
+ ret = store_conn_addrinfo(conn, addrlist);
+ pg_freeaddrinfo_all(hint.ai_family, addrlist);
+ if (ret)
+ goto error_return; /* message already logged */
+
reset_connection_state_machine = true;
conn->try_next_host = false;
}
@@ -2673,31 +2682,30 @@ keep_going: /* We will come back to here until there is
{
/*
* Try to initiate a connection to one of the addresses
- * returned by pg_getaddrinfo_all(). conn->addr_cur is the
+ * returned by pg_getaddrinfo_all(). conn->whichaddr is the
* next one to try.
*
* The extra level of braces here is historical. It's not
* worth reindenting this whole switch case to remove 'em.
*/
{
- struct addrinfo *addr_cur = conn->addr_cur;
char host_addr[NI_MAXHOST];
int sock_type;
+ AddrInfo *addr_cur;
/*
* Advance to next possible host, if we've tried all of
* the addresses for the current host.
*/
- if (addr_cur == NULL)
+ if (conn->whichaddr == conn->naddr)
{
conn->try_next_host = true;
goto keep_going;
}
+ addr_cur = &conn->addr[conn->whichaddr];
/* Remember current address for possible use later */
- memcpy(&conn->raddr.addr, addr_cur->ai_addr,
- addr_cur->ai_addrlen);
- conn->raddr.salen = addr_cur->ai_addrlen;
+ memcpy(&conn->raddr, &addr_cur->addr, sizeof(SockAddr));
/*
* Set connip, too. Note we purposely ignore strdup
@@ -2732,7 +2740,7 @@ keep_going: /* We will come back to here until there is
*/
sock_type |= SOCK_NONBLOCK;
#endif
- conn->sock = socket(addr_cur->ai_family, sock_type, 0);
+ conn->sock = socket(addr_cur->family, sock_type, 0);
if (conn->sock == PGINVALID_SOCKET)
{
int errorno = SOCK_ERRNO;
@@ -2743,7 +2751,7 @@ keep_going: /* We will come back to here until there is
* cases where the address list includes both IPv4 and
* IPv6 but kernel only accepts one family.
*/
- if (addr_cur->ai_next != NULL ||
+ if (conn->whichaddr < conn->naddr ||
conn->whichhost + 1 < conn->nconnhost)
{
conn->try_next_addr = true;
@@ -2769,7 +2777,7 @@ keep_going: /* We will come back to here until there is
* TCP sockets, nonblock mode, close-on-exec. Try the
* next address if any of this fails.
*/
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
if (!connectNoDelay(conn))
{
@@ -2800,7 +2808,7 @@ keep_going: /* We will come back to here until there is
#endif /* F_SETFD */
#endif
- if (addr_cur->ai_family != AF_UNIX)
+ if (addr_cur->family != AF_UNIX)
{
#ifndef WIN32
int on = 1;
@@ -2892,8 +2900,8 @@ keep_going: /* We will come back to here until there is
* Start/make connection. This should not block, since we
* are in nonblock mode. If it does, well, too bad.
*/
- if (connect(conn->sock, addr_cur->ai_addr,
- addr_cur->ai_addrlen) < 0)
+ if (connect(conn->sock, (struct sockaddr *) &addr_cur->addr.addr,
+ addr_cur->addr.salen) < 0)
{
if (SOCK_ERRNO == EINPROGRESS ||
#ifdef WIN32
@@ -4318,6 +4326,49 @@ freePGconn(PGconn *conn)
free(conn);
}
+/*
+ * store_conn_addrinfo
+ * - copy addrinfo to PGconn object
+ *
+ * Copies the addrinfos from addrlist to the PGconn object such that the
+ * addrinfos can be manipulated by libpq. Returns a positive integer on
+ * failure, otherwise zero.
+ */
+static int
+store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist)
+{
+ struct addrinfo *ai = addrlist;
+
+ conn->whichaddr = 0;
+
+ conn->naddr = 0;
+ while (ai)
+ {
+ ai = ai->ai_next;
+ conn->naddr++;
+ }
+
+ conn->addr = calloc(conn->naddr, sizeof(AddrInfo));
+ if (conn->addr == NULL)
+ {
+ libpq_append_conn_error(conn, "out of memory");
+ return 1;
+ }
+
+ ai = addrlist;
+ for (int i = 0; i < conn->naddr; i++)
+ {
+ conn->addr[i].family = ai->ai_family;
+
+ memcpy(&conn->addr[i].addr.addr, ai->ai_addr,
+ ai->ai_addrlen);
+ conn->addr[i].addr.salen = ai->ai_addrlen;
+ ai = ai->ai_next;
+ }
+
+ return 0;
+}
+
/*
* release_conn_addrinfo
* - Free any addrinfo list in the PGconn.
@@ -4325,11 +4376,10 @@ freePGconn(PGconn *conn)
static void
release_conn_addrinfo(PGconn *conn)
{
- if (conn->addrlist)
+ if (conn->addr)
{
- pg_freeaddrinfo_all(conn->addrlist_family, conn->addrlist);
- conn->addrlist = NULL;
- conn->addr_cur = NULL; /* for safety */
+ free(conn->addr);
+ conn->addr = NULL;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1ff57044508..760ee3f6912 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -471,9 +471,10 @@ struct pg_conn
PGTargetServerType target_server_type; /* desired session properties */
bool try_next_addr; /* time to advance to next address/host? */
bool try_next_host; /* time to advance to next connhost[]? */
- struct addrinfo *addrlist; /* list of addresses for current connhost */
- struct addrinfo *addr_cur; /* the one currently being tried */
- int addrlist_family; /* needed to know how to free addrlist */
+ int naddr; /* number of addresses returned by getaddrinfo */
+ int whichaddr; /* the address currently being tried */
+ AddrInfo *addr; /* the array of addresses for the currently
+ * tried host */
bool send_appname; /* okay to send application_name? */
/* Miscellaneous stuff */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f5cd394b335..d4f49878298 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -26,6 +26,7 @@ AcquireSampleRowsFunc
ActionList
ActiveSnapshotElt
AddForeignUpdateTargets_function
+AddrInfo
AffixNode
AffixNodeData
AfterTriggerEvent
--
2.34.1
v18-0003-Return-2-from-pqReadData-on-EOF.patch
From 63453f9ffdf62fb3fecbabb4dbc9124350eabbf0 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v18 3/5] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 61f8a5c9c6c..351161bd0f9 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except when the failure means that the
+ * connection was closed cleanly; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Jelte Fennema wrote:
On Wed, 29 Mar 2023 at 10:43, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
More importantly, not having PQcancelSend() creating the PGcancelConn
makes reuse of that value, passing through PQcancelReset(), more
intuitive. E.g., in the tests:
You convinced me. Attached is an updated patch where PQcancelSend
takes the PGcancelConn and returns 1 or 0.
Patch 5 is missing respective changes; please find attached a fixup
patch for these.
Attachments:
0001-fixup-Start-using-new-libpq-cancel-APIs.patch
From c9e59fb3e30db1bfab75be9fdd4afbc227a5270e Mon Sep 17 00:00:00 2001
From: Denis Laxalde <denis.laxalde@dalibo.com>
Date: Thu, 30 Mar 2023 09:19:18 +0200
Subject: [PATCH] fixup! Start using new libpq cancel APIs
---
contrib/dblink/dblink.c | 4 ++--
src/fe_utils/connect_utils.c | 4 +++-
src/test/isolation/isolationtester.c | 4 ++--
3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index e139f66e11..073795f088 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1332,11 +1332,11 @@ dblink_cancel_query(PG_FUNCTION_ARGS)
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancelConn = PQcancelSend(conn);
+ cancelConn = PQcancelConn(conn);
PG_TRY();
{
- if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ if (!PQcancelSend(cancelConn))
{
msg = pchomp(PQcancelErrorMessage(cancelConn));
}
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index b32448c010..1cfd717217 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -161,7 +161,9 @@ disconnectDatabase(PGconn *conn)
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PQcancelFinish(PQcancelSend(conn));
+ PGcancelConn *cancelConn = PQcancelConn(conn);
+ PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 3781f7982b..de31a87571 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,9 +946,9 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancelConn *cancel_conn = PQcancelSend(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (PQcancelStatus(cancel_conn) == CONNECTION_OK)
+ if (PQcancelSend(cancel_conn))
{
/*
* print to stdout not stderr, as this should appear in
--
2.30.2
On Thu, 30 Mar 2023 at 10:07, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
Patch 5 is missing respective changes; please find attached a fixup
patch for these.
Thanks, attached are newly rebased patches that include this change. I
also cast the result of PQcancelSend to void in the one case where
it's ignored on purpose. Note that the patchset shrank by one, since
the original patch 0002 has been committed now.
Attachments:
v19-0002-Return-2-from-pqReadData-on-EOF.patch
From a03152f6b81ff329e3265598bd111719c0c3bf52 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v19 2/4] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of comparing strictly against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 61f8a5c9c6c..351161bd0f9 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failures, except when the failure means that the
+ * connection was closed cleanly; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
v19-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch
From 513629d4dba779e479c14d12e1d96f1af59cd83b Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v19 1/4] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 ++++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 64 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 82 insertions(+), 82 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index a16bbf32ef5..14d706efd57 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1448,7 +1448,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1516,7 +1516,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1562,7 +1562,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1656,7 +1656,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2103,10 +2103,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3051,6 +3050,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 038e847b7e9..0af4de941af 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -590,8 +590,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 4d1e4009ef1..61f8a5c9c6c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -1000,7 +1000,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1026,7 +1026,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1070,7 +1070,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1122,10 +1122,10 @@ initialize_SSL(PGconn *conn)
*/
if (fnbuf[0] == '\0')
libpq_append_conn_error(conn, "could not get home directory to locate root certificate file\n"
- "Either provide the file or change sslmode to disable server certificate verification.");
+ "Either provide the file or change sslmode to disable server certificate verification.");
else
libpq_append_conn_error(conn, "root certificate file \"%s\" does not exist\n"
- "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
+ "Either provide the file or change sslmode to disable server certificate verification.", fnbuf);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1160,7 +1160,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1178,7 +1178,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1277,7 +1277,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1288,7 +1288,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1303,7 +1303,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1316,7 +1316,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1353,10 +1353,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1364,7 +1364,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1421,7 +1421,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1437,7 +1437,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1490,7 +1490,7 @@ open_client_SSL(PGconn *conn)
if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1532,12 +1532,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d93e976ca57..e7a2045d41a 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -917,8 +917,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
base-commit: 2fe7a6df94e69a20c57f71a0592133684cf612da
--
2.34.1
v19-0003-Add-non-blocking-version-of-PQcancel.patch
From 33c4bfe84bf67b7ecabc73e2441708c610e41bd9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v19 3/4] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq's cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 ++++++++++-
6 files changed, 986 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 9f72dd29d89..acc75ea342e 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5121,7 +5121,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5840,13 +5840,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it does not immediately start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed. A <symbol>CONNECTION_OK</symbol> result for a
+ <structname>PGconn</structname>, in contrast, means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5888,14 +6098,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5903,21 +6127,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal safety, some concessions had to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server at the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5929,13 +6154,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists because of backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9104,7 +9338,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index e8bcc883709..f56e8c185c4 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -186,3 +186,11 @@ PQpipelineStatus 183
PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
+PQcancelSend 187
+PQcancelConn 188
+PQcancelPoll 189
+PQcancelStatus 190
+PQcancelSocket 191
+PQcancelErrorMessage 192
+PQcancelReset 193
+PQcancelFinish 194
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index bb7347cb0c0..360e3e57cea 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -392,8 +392,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -620,8 +622,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would lose access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -752,6 +763,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -927,6 +1045,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2325,10 +2482,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should try only one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2470,7 +2635,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2595,13 +2763,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3243,6 +3415,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3987,8 +4182,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4356,19 +4557,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4486,6 +4676,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4493,6 +4708,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4546,7 +4770,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4961,6 +5191,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBstart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once
+ * we get to CONNECTION_AWAITING_RESPONSE we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than EOF as a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we
+ * unexpectedly do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index f3d92204964..84d64c9a658 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e7a2045d41a..a938da718da 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -410,6 +410,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..6101e5d6143 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ };
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v19-0004-Start-using-new-libpq-cancel-APIs.patchapplication/octet-stream; name=v19-0004-Start-using-new-libpq-cancel-APIs.patchDownload
From 4fa51e98314402489f8a2ae29272b8bc1fc6834a Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v19 4/4] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 ++++--
contrib/postgres_fdw/connection.c | 99 ++++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +--
src/test/isolation/isolationtester.c | 29 +++---
6 files changed, 141 insertions(+), 50 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 78a8bcee6e3..073795f0882 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1326,22 +1326,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 8eb9194506c..3f9a408a6af 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1233,35 +1233,104 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
- PGcancel *cancel;
- char errbuf[256];
PGresult *result = NULL;
- TimestampTz endtime;
- bool timed_out;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), 30000);
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
}
- PQfreeCancel(cancel);
}
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ if (failed)
+ return false;
/* Get and discard the result of the query. */
if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 04a3ef450cf..064c3103a5e 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2688,6 +2688,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 4f3088c03ea..640958df136 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -713,6 +713,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..43ccb302927 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
The patch set does not apply any more.
I tried to rebase locally; even leaving out patch 1 ("libpq: Run pgindent
after a9e9a9f32b3"), patch 4 ("Start using new libpq cancel APIs") is
harder to resolve following 983ec23007b (I suppose).
Apart from that, the implementation in v19 sounds good to me, and seems
worthwhile. FWIW, as said before, I also implemented it in Psycopg in a
sort of an end-to-end validation.
Okay, I rebased again. Indeed 983ec23007b gave the most problems.
On Fri, 7 Apr 2023 at 10:02, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
> The patch set does not apply any more. [...]
Attachments:
v20-0003-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=v20-0003-Add-non-blocking-version-of-PQcancel.patchDownload
From d33327d2c0c66885a76a0eed1ae4a88f5d0eb423 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v20 3/4] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 265 ++++++++++-
6 files changed, 986 insertions(+), 52 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 27fe22de953..699eab70cc9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5176,7 +5176,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5895,13 +5895,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, a <structname>PGcancelConn</structname> can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, that connection is closed. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5943,14 +6153,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5958,21 +6182,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5984,13 +6209,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9171,7 +9405,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
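For illustration, the non-blocking API documented above could be driven from a caller's own readiness loop roughly like this. This is a sketch, not part of the patch; it assumes a connected `PGconn *conn` with a query in flight, and uses `select()` where a real event loop would register the socket instead:

```c
#include <stdio.h>
#include <stdbool.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Sketch: cancel the in-flight query on "conn" without blocking the
 * rest of the program on connection establishment. */
static bool
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return false;
	}

	/* Drive the cancel connection to completion. The first call to
	 * PQcancelPoll opens the socket (CONNECTION_STARTING). */
	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		rfds, wfds;

		if (pollres == PGRES_POLLING_OK)
			break;
		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return false;
		}
		FD_ZERO(&rfds);
		FD_ZERO(&wfds);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &rfds);
		else
			FD_SET(sock, &wfds);
		(void) select(sock + 1, &rfds, &wfds, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return true;
}
```

After this returns, the application still has to drain the original connection with PQgetResult as usual.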
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 7ded77aff37..586927f227d 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -187,3 +187,11 @@ PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
PQconnectionUsedGSSAPI 187
+PQcancelSend 188
+PQcancelConn 189
+PQcancelPoll 190
+PQcancelStatus 191
+PQcancelSocket 192
+PQcancelErrorMessage 193
+PQcancelReset 194
+PQcancelFinish 195
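The blocking flow is simpler, and one PGcancelConn can be reused across multiple cancel requests via PQcancelReset, as the documentation above describes. A sketch (not part of the patch; assumes a valid `PGconn *conn` whose queries we want to cancel twice over its lifetime):

```c
#include <stdio.h>
#include "libpq-fe.h"

/* Sketch: create one PGcancelConn per PGconn and reuse it, using the
 * blocking PQcancelSend API added by this patch. */
static void
cancel_twice(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	for (int i = 0; i < 2; i++)
	{
		/* Sends the cancel request and waits for the server to close
		 * the connection, which signals that it was handled. */
		if (!PQcancelSend(cancelConn))
			fprintf(stderr, "cancel failed: %s",
					PQcancelErrorMessage(cancelConn));

		/* Prepare the object for the next cancel request */
		PQcancelReset(cancelConn);
	}
	PQcancelFinish(cancelConn);
}
```

Note that success here only means the request was dispatched; whether the query was actually cancelled must be observed on the original connection.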
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index fcd3d0d9a35..db1c3a9396c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -396,8 +396,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -625,8 +627,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -757,6 +768,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -932,6 +1050,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2363,10 +2520,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2508,7 +2673,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2633,13 +2801,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3281,6 +3453,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4025,8 +4220,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4394,19 +4595,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4525,6 +4715,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4532,6 +4747,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4585,7 +4809,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -5000,6 +5230,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than an EOF as a real error
+ * and report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we
+ * unexpectedly do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7476dbe0e90..5dffab36eb6 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 16321aed251..8fdd291bb76 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -411,6 +411,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -623,6 +627,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f48da7d963e..6101e5d6143 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+	if (PQstatus(monitorConn) != CONNECTION_OK)
+	{
+		pg_fatal("Connection to database failed: %s",
+				 PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+	}
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+	}
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -985,7 +1243,7 @@ test_prepared(PGconn *conn)
static void
notice_processor(void *arg, const char *message)
{
- int *n_notices = (int *) arg;
+ int *n_notices = (int *) arg;
(*n_notices)++;
fprintf(stderr, "NOTICE %d: %s", *n_notices, message);
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Attachment: v20-0004-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 60ce98c5fc04aed75abbb51a5eef28ad1d167693 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v20 4/4] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 55f75eff361..7120c261540 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1328,22 +1328,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index da32d503bc5..232158b66d8 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -128,7 +128,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1356,36 +1356,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1722,7 +1790,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index fd5752bd5bf..660041b454d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2689,6 +2689,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index c05046f8676..1c206bebd08 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -714,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..43ccb302927 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v20-0002-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From e7780d84ce0074c4d27608b89bc8ea8c4b1c81ed Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v20 2/4] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client will close the connection
instead of the server. But for Postgres its cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated as all of them check if the result is less than 0
instead of doing a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by the other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 390c888c962..cb0eefa408c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 on failure, except when the failure is a clean connection
+ * closure; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Attachment: v20-0001-libpq-Run-pgindent-after-a9e9a9f32b3.patch (application/octet-stream)
From 2038323d814c5d1b5ae156fb2a219dc5216062b4 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 30 Nov 2022 10:07:19 +0100
Subject: [PATCH v20 1/4] libpq: Run pgindent after a9e9a9f32b3
It seems that pgindent was not run after the error handling refactor in
commit a9e9a9f32b35edf129c88e8b929ef223f8511f59. This fixes that and
also addresses a few other things pgindent wanted to change in libpq.
---
src/interfaces/libpq/fe-exec.c | 16 +++---
src/interfaces/libpq/fe-lobj.c | 42 +++++++--------
src/interfaces/libpq/fe-misc.c | 10 ++--
src/interfaces/libpq/fe-protocol3.c | 2 +-
src/interfaces/libpq/fe-secure-common.c | 6 +--
src/interfaces/libpq/fe-secure-gssapi.c | 12 ++---
src/interfaces/libpq/fe-secure-openssl.c | 66 ++++++++++++------------
src/interfaces/libpq/fe-secure.c | 8 +--
src/interfaces/libpq/libpq-int.h | 4 +-
9 files changed, 83 insertions(+), 83 deletions(-)
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index a16bbf32ef5..14d706efd57 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -1448,7 +1448,7 @@ PQsendQueryInternal(PGconn *conn, const char *query, bool newQuery)
if (conn->pipelineStatus != PQ_PIPELINE_OFF)
{
libpq_append_conn_error(conn, "%s not allowed in pipeline mode",
- "PQsendQuery");
+ "PQsendQuery");
return 0;
}
@@ -1516,7 +1516,7 @@ PQsendQueryParams(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1562,7 +1562,7 @@ PQsendPrepare(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -1656,7 +1656,7 @@ PQsendQueryPrepared(PGconn *conn,
if (nParams < 0 || nParams > PQ_QUERY_PARAM_MAX_LIMIT)
{
libpq_append_conn_error(conn, "number of parameters must be between 0 and %d",
- PQ_QUERY_PARAM_MAX_LIMIT);
+ PQ_QUERY_PARAM_MAX_LIMIT);
return 0;
}
@@ -2103,10 +2103,9 @@ PQgetResult(PGconn *conn)
/*
* We're about to return the NULL that terminates the round of
- * results from the current query; prepare to send the results
- * of the next query, if any, when we're called next. If there's
- * no next element in the command queue, this gets us in IDLE
- * state.
+ * results from the current query; prepare to send the results of
+ * the next query, if any, when we're called next. If there's no
+ * next element in the command queue, this gets us in IDLE state.
*/
pqPipelineProcessQueue(conn);
res = NULL; /* query is complete */
@@ -3051,6 +3050,7 @@ pqPipelineProcessQueue(PGconn *conn)
return;
case PGASYNC_IDLE:
+
/*
* If we're in IDLE mode and there's some command in the queue,
* get us into PIPELINE_IDLE mode and process normally. Otherwise
diff --git a/src/interfaces/libpq/fe-lobj.c b/src/interfaces/libpq/fe-lobj.c
index 4cb6a468597..206266fd043 100644
--- a/src/interfaces/libpq/fe-lobj.c
+++ b/src/interfaces/libpq/fe-lobj.c
@@ -142,7 +142,7 @@ lo_truncate(PGconn *conn, int fd, size_t len)
if (conn->lobjfuncs->fn_lo_truncate == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate");
+ "lo_truncate");
return -1;
}
@@ -205,7 +205,7 @@ lo_truncate64(PGconn *conn, int fd, pg_int64 len)
if (conn->lobjfuncs->fn_lo_truncate64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_truncate64");
+ "lo_truncate64");
return -1;
}
@@ -395,7 +395,7 @@ lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence)
if (conn->lobjfuncs->fn_lo_lseek64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek64");
+ "lo_lseek64");
return -1;
}
@@ -485,7 +485,7 @@ lo_create(PGconn *conn, Oid lobjId)
if (conn->lobjfuncs->fn_lo_create == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_create");
+ "lo_create");
return InvalidOid;
}
@@ -558,7 +558,7 @@ lo_tell64(PGconn *conn, int fd)
if (conn->lobjfuncs->fn_lo_tell64 == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell64");
+ "lo_tell64");
return -1;
}
@@ -667,7 +667,7 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
if (fd < 0)
{ /* error */
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -723,8 +723,8 @@ lo_import_internal(PGconn *conn, const char *filename, Oid oid)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not read from file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return InvalidOid;
}
@@ -778,8 +778,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not open file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -799,8 +799,8 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
/* deliberately overwrite any error from lo_close */
pqClearConnErrorState(conn);
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename,
- strerror_r(save_errno, sebuf, sizeof(sebuf)));
+ filename,
+ strerror_r(save_errno, sebuf, sizeof(sebuf)));
return -1;
}
}
@@ -822,7 +822,7 @@ lo_export(PGconn *conn, Oid lobjId, const char *filename)
if (close(fd) != 0 && result >= 0)
{
libpq_append_conn_error(conn, "could not write to file \"%s\": %s",
- filename, strerror_r(errno, sebuf, sizeof(sebuf)));
+ filename, strerror_r(errno, sebuf, sizeof(sebuf)));
result = -1;
}
@@ -954,56 +954,56 @@ lo_initialize(PGconn *conn)
if (lobjfuncs->fn_lo_open == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_open");
+ "lo_open");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_close == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_close");
+ "lo_close");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_creat == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_creat");
+ "lo_creat");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_unlink == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_unlink");
+ "lo_unlink");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_lseek == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_lseek");
+ "lo_lseek");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_tell == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lo_tell");
+ "lo_tell");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_read == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "loread");
+ "loread");
free(lobjfuncs);
return -1;
}
if (lobjfuncs->fn_lo_write == 0)
{
libpq_append_conn_error(conn, "cannot determine OID of function %s",
- "lowrite");
+ "lowrite");
free(lobjfuncs);
return -1;
}
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 3653a1a8a62..660cdec93c9 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -749,8 +749,8 @@ retry4:
*/
definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
@@ -1067,7 +1067,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
char sebuf[PG_STRERROR_R_BUFLEN];
libpq_append_conn_error(conn, "%s() failed: %s", "select",
- SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
}
return result;
@@ -1280,7 +1280,7 @@ libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
* newline.
*/
void
-libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
+libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...)
{
int save_errno = errno;
bool done;
@@ -1309,7 +1309,7 @@ libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...)
* format should not end with a newline.
*/
void
-libpq_append_conn_error(PGconn *conn, const char *fmt, ...)
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
{
int save_errno = errno;
bool done;
diff --git a/src/interfaces/libpq/fe-protocol3.c b/src/interfaces/libpq/fe-protocol3.c
index 8ab6a884165..b79d74f7489 100644
--- a/src/interfaces/libpq/fe-protocol3.c
+++ b/src/interfaces/libpq/fe-protocol3.c
@@ -466,7 +466,7 @@ static void
handleSyncLoss(PGconn *conn, char id, int msgLength)
{
libpq_append_conn_error(conn, "lost synchronization with server: got message type \"%c\", length %d",
- id, msgLength);
+ id, msgLength);
/* build an error result holding the error message */
pqSaveErrorResult(conn);
conn->asyncStatus = PGASYNC_READY; /* drop out of PQgetResult wait loop */
diff --git a/src/interfaces/libpq/fe-secure-common.c b/src/interfaces/libpq/fe-secure-common.c
index de115b37649..3ecc7bf6159 100644
--- a/src/interfaces/libpq/fe-secure-common.c
+++ b/src/interfaces/libpq/fe-secure-common.c
@@ -226,7 +226,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
* wrong given the subject matter.
*/
libpq_append_conn_error(conn, "certificate contains IP address with invalid length %zu",
- iplen);
+ iplen);
return -1;
}
@@ -235,7 +235,7 @@ pq_verify_peer_name_matches_certificate_ip(PGconn *conn,
if (!addrstr)
{
libpq_append_conn_error(conn, "could not convert certificate's IP address to string: %s",
- strerror_r(errno, sebuf, sizeof(sebuf)));
+ strerror_r(errno, sebuf, sizeof(sebuf)));
return -1;
}
@@ -292,7 +292,7 @@ pq_verify_peer_name_matches_certificate(PGconn *conn)
else if (names_examined == 1)
{
libpq_append_conn_error(conn, "server certificate for \"%s\" does not match host name \"%s\"",
- first_name, host);
+ first_name, host);
}
else
{
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index 95ded9eeaa0..3b2d0fd1401 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -213,8 +213,8 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)
if (output.length > PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "client tried to send oversize GSSAPI packet (%zu > %zu)",
- (size_t) output.length,
- PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
+ (size_t) output.length,
+ PQ_GSS_SEND_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
goto cleanup;
}
@@ -349,8 +349,8 @@ pg_GSS_read(PGconn *conn, void *ptr, size_t len)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
errno = EIO; /* for lack of a better idea */
return -1;
}
@@ -591,8 +591,8 @@ pqsecure_open_gss(PGconn *conn)
if (input.length > PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32))
{
libpq_append_conn_error(conn, "oversize GSSAPI packet sent by the server (%zu > %zu)",
- (size_t) input.length,
- PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
+ (size_t) input.length,
+ PQ_GSS_RECV_BUFFER_SIZE - sizeof(uint32));
return PGRES_POLLING_FAILED;
}
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 470e9265400..390c888c962 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -213,12 +213,12 @@ rloop:
if (result_errno == EPIPE ||
result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -313,12 +313,12 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)
result_errno = SOCK_ERRNO;
if (result_errno == EPIPE || result_errno == ECONNRESET)
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
else
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
}
else
{
@@ -415,7 +415,7 @@ pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len)
if (algo_type == NULL)
{
libpq_append_conn_error(conn, "could not find digest for NID %s",
- OBJ_nid2sn(algo_nid));
+ OBJ_nid2sn(algo_nid));
return NULL;
}
break;
@@ -1000,7 +1000,7 @@ initialize_SSL(PGconn *conn)
if (ssl_min_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for minimum SSL protocol version",
- conn->ssl_min_protocol_version);
+ conn->ssl_min_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1026,7 +1026,7 @@ initialize_SSL(PGconn *conn)
if (ssl_max_ver == -1)
{
libpq_append_conn_error(conn, "invalid value \"%s\" for maximum SSL protocol version",
- conn->ssl_max_protocol_version);
+ conn->ssl_max_protocol_version);
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1091,7 +1091,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read root certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1161,7 +1161,7 @@ initialize_SSL(PGconn *conn)
else
fnbuf[0] = '\0';
- if (conn->sslcertmode[0] == 'd') /* disable */
+ if (conn->sslcertmode[0] == 'd') /* disable */
{
/* don't send a client cert even if we have one */
have_cert = false;
@@ -1181,7 +1181,7 @@ initialize_SSL(PGconn *conn)
if (errno != ENOENT && errno != ENOTDIR)
{
libpq_append_conn_error(conn, "could not open certificate file \"%s\": %s",
- fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
+ fnbuf, strerror_r(errno, sebuf, sizeof(sebuf)));
SSL_CTX_free(SSL_context);
return -1;
}
@@ -1199,7 +1199,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read certificate file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
SSL_CTX_free(SSL_context);
return -1;
@@ -1298,7 +1298,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
free(engine_str);
return -1;
@@ -1309,7 +1309,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not initialize SSL engine \"%s\": %s",
- engine_str, err);
+ engine_str, err);
SSLerrfree(err);
ENGINE_free(conn->engine);
conn->engine = NULL;
@@ -1324,7 +1324,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not read private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1337,7 +1337,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "could not load private SSL key \"%s\" from engine \"%s\": %s",
- engine_colon, engine_str, err);
+ engine_colon, engine_str, err);
SSLerrfree(err);
ENGINE_finish(conn->engine);
ENGINE_free(conn->engine);
@@ -1374,10 +1374,10 @@ initialize_SSL(PGconn *conn)
{
if (errno == ENOENT)
libpq_append_conn_error(conn, "certificate present, but not private key file \"%s\"",
- fnbuf);
+ fnbuf);
else
libpq_append_conn_error(conn, "could not stat private key file \"%s\": %m",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1385,7 +1385,7 @@ initialize_SSL(PGconn *conn)
if (!S_ISREG(buf.st_mode))
{
libpq_append_conn_error(conn, "private key file \"%s\" is not a regular file",
- fnbuf);
+ fnbuf);
return -1;
}
@@ -1442,7 +1442,7 @@ initialize_SSL(PGconn *conn)
if (SSL_use_PrivateKey_file(conn->ssl, fnbuf, SSL_FILETYPE_ASN1) != 1)
{
libpq_append_conn_error(conn, "could not load private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1458,7 +1458,7 @@ initialize_SSL(PGconn *conn)
char *err = SSLerrmessage(ERR_get_error());
libpq_append_conn_error(conn, "certificate does not match private key file \"%s\": %s",
- fnbuf, err);
+ fnbuf, err);
SSLerrfree(err);
return -1;
}
@@ -1520,8 +1520,8 @@ open_client_SSL(PGconn *conn)
* it means that verification failed due to a missing
* system CA pool without it being a protocol error. We
* inspect the sslrootcert setting to ensure that the user
- * was using the system CA pool. For other errors, log them
- * using the normal SYSCALL logging.
+ * was using the system CA pool. For other errors, log
+ * them using the normal SYSCALL logging.
*/
if (!save_errno && vcode == X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY &&
strcmp(conn->sslrootcert, "system") == 0)
@@ -1529,7 +1529,7 @@ open_client_SSL(PGconn *conn)
X509_verify_cert_error_string(vcode));
else if (r == -1)
libpq_append_conn_error(conn, "SSL SYSCALL error: %s",
- SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(save_errno, sebuf, sizeof(sebuf)));
else
libpq_append_conn_error(conn, "SSL SYSCALL error: EOF detected");
pgtls_close(conn);
@@ -1571,12 +1571,12 @@ open_client_SSL(PGconn *conn)
case SSL_R_VERSION_TOO_LOW:
#endif
libpq_append_conn_error(conn, "This may indicate that the server does not support any SSL protocol version between %s and %s.",
- conn->ssl_min_protocol_version ?
- conn->ssl_min_protocol_version :
- MIN_OPENSSL_TLS_VERSION,
- conn->ssl_max_protocol_version ?
- conn->ssl_max_protocol_version :
- MAX_OPENSSL_TLS_VERSION);
+ conn->ssl_min_protocol_version ?
+ conn->ssl_min_protocol_version :
+ MIN_OPENSSL_TLS_VERSION,
+ conn->ssl_max_protocol_version ?
+ conn->ssl_max_protocol_version :
+ MAX_OPENSSL_TLS_VERSION);
break;
default:
break;
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 66e401bf3d9..8069e381424 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -255,14 +255,14 @@ pqsecure_raw_read(PGconn *conn, void *ptr, size_t len)
case EPIPE:
case ECONNRESET:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
- "\tThis probably means the server terminated abnormally\n"
- "\tbefore or while processing the request.");
+ "\tThis probably means the server terminated abnormally\n"
+ "\tbefore or while processing the request.");
break;
default:
libpq_append_conn_error(conn, "could not receive data from server: %s",
- SOCK_STRERROR(result_errno,
- sebuf, sizeof(sebuf)));
+ SOCK_STRERROR(result_errno,
+ sebuf, sizeof(sebuf)));
break;
}
}
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index ce0167c1b66..16321aed251 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -919,8 +919,8 @@ extern char *libpq_ngettext(const char *msgid, const char *msgid_plural, unsigne
*/
#undef _
-extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);
-extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) pg_attribute_printf(2, 3);
+extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
/*
* These macros are needed to let error-handling code be portable between
base-commit: a9781ae11ba2fdb44a3a72c9a7ebb727140b25c5
--
2.34.1
I noticed that cfbot was unable to run tests due to a rebase
conflict. It seems the pgindent changes from patch 1 have since been
applied, so I'm adding the rebased patches, without patch 1, to unblock cfbot.
Attachments:
v21-0002-Return-2-from-pqReadData-on-EOF.patchapplication/octet-stream; name=v21-0002-Return-2-from-pqReadData-on-EOF.patchDownload
From e883ed4375671de5d1321346d307795a2645322f Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v21 2/4] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client is expected to close the
connection, not the server. But for Postgres its cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated as all of them check if the result is less than 0
instead of making a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection was closed cleanly by the other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 390c888c962..cb0eefa408c 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -248,7 +248,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index 8069e381424..20265dcb317 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -199,6 +199,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means that the
+ * connection was closed cleanly; in those cases -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
base-commit: 7fcd7ef2a9c372b789f95b40043edffdc611c566
--
2.34.1
v21-0003-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=v21-0003-Add-non-blocking-version-of-PQcancel.patchDownload
From 4d052d605f8e1d36f331c9229b26b3a5b6f191ba Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v21 3/4] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
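For illustration, the non-blocking flow described above might be driven like this. This is an untested sketch based only on the function signatures in this patch (PQcancelConn, PQcancelPoll, PQcancelSocket, PQcancelStatus, PQcancelErrorMessage, PQcancelFinish); it assumes `conn` is an established PGconn with a query in flight, and uses select() as a stand-in for a real event loop, which would instead register the socket and resume polling on readiness events.

```c
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Returns 1 if the cancel request was fully dispatched, 0 on failure. */
static int
cancel_query_nonblocking(PGconn *conn)
{
    PGcancelConn *cancelConn = PQcancelConn(conn);

    if (cancelConn == NULL)
        return 0;               /* out of memory */

    if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
    {
        fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
        PQcancelFinish(cancelConn);
        return 0;
    }

    for (;;)
    {
        PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
        int         sock = PQcancelSocket(cancelConn);
        fd_set      fds;

        if (pollres == PGRES_POLLING_OK)
            break;              /* cancel request fully dispatched */
        if (pollres == PGRES_POLLING_FAILED || sock < 0)
        {
            fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
            PQcancelFinish(cancelConn);
            return 0;
        }

        /* An event loop would register for this readiness event instead. */
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        if (pollres == PGRES_POLLING_READING)
            select(sock + 1, &fds, NULL, NULL, NULL);
        else
            select(sock + 1, NULL, &fds, NULL, NULL);
    }

    PQcancelFinish(cancelConn);
    return 1;
}
```

Dispatch succeeding is, as the docs note, no guarantee the query was actually cancelled; the application must still drain results with PQgetResult on the original connection.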
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++-
6 files changed, 985 insertions(+), 51 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 2225e4e0ef3..041ab5c1550 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5176,7 +5176,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -5895,13 +5895,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually cancelled) and that the
+ connection is now closed. By contrast, a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed.
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -5943,14 +6153,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -5958,21 +6182,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -5984,13 +6209,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9181,7 +9415,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 7ded77aff37..586927f227d 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -187,3 +187,11 @@ PQsetTraceFlags 184
PQmblenBounded 185
PQsendFlushRequest 186
PQconnectionUsedGSSAPI 187
+PQcancelSend 188
+PQcancelConn 189
+PQcancelPoll 190
+PQcancelStatus 191
+PQcancelSocket 192
+PQcancelErrorMessage 193
+PQcancelReset 194
+PQcancelFinish 195
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index a8584d2c684..cfbccb492b5 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -396,8 +396,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -625,8 +627,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -757,6 +768,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -932,6 +1050,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2363,10 +2520,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address, and these fields have already been set up in PQcancelConn. So
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2508,7 +2673,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2633,13 +2801,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3281,6 +3453,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4025,8 +4220,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4394,19 +4595,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4525,6 +4715,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4532,6 +4747,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4585,7 +4809,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -5000,6 +5230,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBstart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than EOF as a failure and
+ * report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do
+ * unexpectedly receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7476dbe0e90..5dffab36eb6 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 0045f83cbfd..f83fd930d12 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -411,6 +411,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -623,6 +627,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index f5b4d4d1ff2..6101e5d6143 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got cancelled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * test that PQcancelReset works on the cancel connection and that it can
+ * be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1681,6 +1939,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1782,7 +2041,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v21-0004-Start-using-new-libpq-cancel-APIs.patchapplication/octet-stream; name=v21-0004-Start-using-new-libpq-cancel-APIs.patchDownload
From 99976bf48c33bf66a50899910deb6af9bd15bf90 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v21 4/4] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 1ff65d1e521..41a2d33c05b 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1328,22 +1328,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index f839308b400..ef0f4595e8b 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -128,7 +128,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1356,36 +1356,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1722,7 +1790,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c8c4614b547..d26a88fd3ea 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2689,6 +2689,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index b54903ad8fa..074fcef4e42 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -714,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- cleanup
DROP OWNED BY regress_view_owner;
DROP ROLE regress_view_owner;
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..43ccb302927 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Rebased again to resolve some conflicts
Attachments:
v22-0003-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 059aac613ad969715a0e5a79c6a0772fae04f5c9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 12 Jan 2022 09:52:05 +0100
Subject: [PATCH v22 3/4] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
3. Use these two new cancellation APIs everywhere in the codebase where
signal-safety is not a necessity.
The existing PQcancel API uses blocking I/O, which makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. The postgres_fdw
cancellation code has been modified to make use of this.
This patch also includes a test for all of libpq's cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++-
6 files changed, 985 insertions(+), 51 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a52baa27d56..4d504aafda9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5279,7 +5279,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6002,13 +6002,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+      The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+      Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being cancelled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+      it means that the dispatch of the cancel request has completed (although
+      this is no promise that the query was actually cancelled) and that the
+      connection is now closed. In contrast, a <symbol>CONNECTION_OK</symbol>
+      result for a <structname>PGconn</structname> means that queries can be
+      sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6050,14 +6260,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal-handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6065,21 +6289,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6091,13 +6316,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+      <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9285,7 +9519,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 850734ac96c..972322aa9c0 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -191,3 +191,11 @@ PQclosePrepared 188
PQclosePortal 189
PQsendClosePrepared 190
PQsendClosePortal 191
+PQcancelSend 192
+PQcancelConn 193
+PQcancelPoll 194
+PQcancelStatus 195
+PQcancelSocket 196
+PQcancelErrorMessage 197
+PQcancelReset 198
+PQcancelFinish 199
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 837c5321aa1..8f038bc2340 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -394,8 +394,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -623,8 +625,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would lose access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -755,6 +766,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -930,6 +1048,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2361,10 +2518,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2506,7 +2671,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2631,13 +2799,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3279,6 +3451,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4023,8 +4218,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. Cancel requests never actually resolve addresses;
+ * instead the addrinfo exists for the lifetime of the
+ * connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4392,19 +4593,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4523,6 +4713,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4530,6 +4745,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4583,7 +4807,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -4998,6 +5228,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than a clean EOF as a failure
+ * and report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 97762d56f5d..44185a68f45 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c745facfec3..cdf3d483abe 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 9907bc86004..2249fe7b044 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_cancelled(conn) confirm_query_cancelled_impl(__LINE__, conn)
+static void
+confirm_query_cancelled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_cancelled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_cancelled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ /*
+ * Test that PQcancelReset works on the cancel connection and that it
+ * can be reused afterwards.
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_cancelled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1746,6 +2004,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1847,7 +2106,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
Attachment: v22-0002-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From 6d37b6f9e2a8c3e4d68d1b9b50b9534f40c2674a Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Thu, 26 Jan 2023 12:24:38 +0100
Subject: [PATCH v22 2/4] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client is expected to close the
connection, not the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of doing a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index f1192d28f26..7a92e20f2e3 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -241,7 +241,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index bd72a87bbba..e5b733851fe 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -177,6 +177,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means there was a
+ * clean connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
base-commit: 2c2eb0d6b27f498851bace47fc19e4c7fc90af4f
--
2.34.1
Attachment: v22-0004-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From d9f72da8061c97cae7c6e7c550aab2a4df1f8c52 Mon Sep 17 00:00:00 2001
From: Jelte Fennema <jelte.fennema@microsoft.com>
Date: Wed, 25 Jan 2023 13:32:15 +0100
Subject: [PATCH v22 4/4] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 41e1f6c91d6..432434483c7 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1328,22 +1328,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 7e12b722ec9..d1a01a963b6 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -128,7 +128,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1356,36 +1356,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1722,7 +1790,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 852b5b4707e..ef2310a79dd 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2689,6 +2689,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 2fe8abc7af4..be65ba017c0 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -714,6 +714,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7a1edea7c8c..43ccb302927 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Trivial observation: these patches obviously introduce many instances
of words derived from "cancel", but they don't all conform to
established project decisions (cf 21f1e15a) about how to spell them.
We follow the common en-US usage: "canceled", "canceling" but
"cancellation". Blame Webstah et al.
https://english.stackexchange.com/questions/176957/cancellation-canceled-canceling-us-usage
On Mon, 13 Nov 2023 at 03:39, Thomas Munro <thomas.munro@gmail.com> wrote:
We follow the common en-US usage: "canceled", "canceling" but
"cancellation". Blame Webstah et al.
I changed all the places that were not adhering to those spellings.
There were also a few such places in parts of the codebase that
these changes didn't touch. I included a new 0001 patch to fix those.
I do feel like this patchset is pretty much in a committable state. So
it would be very much appreciated if any committer could help push it
over the finish line.
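For reviewers, here is a minimal sketch of how an application might drive the new non-blocking API to completion, based on the declarations added to libpq-fe.h and the polling loop in the libpq_pipeline test. This is illustrative only and untested: error handling is trimmed, and a real event loop would register the socket with its own readiness mechanism instead of the blocking select() used here for brevity.

```c
/* Sketch: send a cancel request without blocking the caller's event
 * loop logic. Uses only the new API from this patchset: PQcancelConn,
 * PQcancelPoll, PQcancelSocket, PQcancelErrorMessage, PQcancelFinish.
 * Returns true if the server acknowledged the cancellation. */
static bool
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	bool		success = false;

	while (true)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (pollres == PGRES_POLLING_OK)
		{
			success = true;
			break;
		}
		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "cancel failed: %s",
					PQcancelErrorMessage(cancelConn));
			break;
		}

		/* Wait for the readiness condition PQcancelPoll asked for. */
		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0 &&
			errno != EINTR)
			break;
	}

	PQcancelFinish(cancelConn);
	return success;
}
```

The blocking PQcancelSend is then conceptually just this loop with the wait done internally, which is what the postgres_fdw and dblink call sites in 0004 rely on.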
Attachments:
v23-0002-Return-2-from-pqReadData-on-EOF.patch (application/octet-stream)
From d54af2f66144752eac72026d5b3420b3ac48b4fe Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:38:55 +0100
Subject: [PATCH v23 2/4] Return -2 from pqReadData on EOF
This patch changes pqReadData to return -2 when a connection is cleanly
closed by the other side. For most of the Postgres protocol this is
considered an error, because the client is expected to close the
connection, not the server. But for the Postgres cancellation protocol
the distinction between errors and clean connection closure is
important, because clean connection closure is the way for the server to
signal that the cancellation was handled.
This patch is in preparation for a follow-up patch where pqReadData is
used for the cancellation protocol implementation.
No existing callsites of pqReadData or any of its internal functions
need to be updated, as all of them check whether the result is less than 0
instead of doing a strict comparison against -1.
---
src/interfaces/libpq/fe-misc.c | 15 +++++++++++----
src/interfaces/libpq/fe-secure-openssl.c | 2 +-
src/interfaces/libpq/fe-secure.c | 6 ++++++
3 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 660cdec93c9..2d49188d910 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -556,8 +556,11 @@ pqPutMsgEnd(PGconn *conn)
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
- * -1: error detected (including EOF = connection closure);
+ * -1: error detected (excluding EOF = clean connection closure);
* conn->errorMessage set
+ * -2: EOF detected, connection is closed cleanly by other side;
+ * conn->errorMessage set
+ *
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
@@ -639,7 +642,7 @@ retry3:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -734,7 +737,7 @@ retry4:
default:
/* pqsecure_read set the error message for us */
- return -1;
+ return nread;
}
}
if (nread > 0)
@@ -751,13 +754,17 @@ definitelyEOF:
libpq_append_conn_error(conn, "server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.");
+ /* Do *not* drop any already-read data; caller still wants it */
+ pqDropConnection(conn, false);
+ conn->status = CONNECTION_BAD; /* No more connection to backend */
+ return -2;
/* Come here if lower-level code already set a suitable errorMessage */
definitelyFailed:
/* Do *not* drop any already-read data; caller still wants it */
pqDropConnection(conn, false);
conn->status = CONNECTION_BAD; /* No more connection to backend */
- return -1;
+ return nread < 0 ? nread : -1;
}
/*
diff --git a/src/interfaces/libpq/fe-secure-openssl.c b/src/interfaces/libpq/fe-secure-openssl.c
index 2b221e7d151..9c1f6646a31 100644
--- a/src/interfaces/libpq/fe-secure-openssl.c
+++ b/src/interfaces/libpq/fe-secure-openssl.c
@@ -241,7 +241,7 @@ rloop:
*/
libpq_append_conn_error(conn, "SSL connection has been closed unexpectedly");
result_errno = ECONNRESET;
- n = -1;
+ n = -2;
break;
default:
libpq_append_conn_error(conn, "unrecognized SSL error code: %d", err);
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index b2430362a90..ab04a3ea34f 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -177,6 +177,12 @@ pqsecure_close(PGconn *conn)
* On failure, this function is responsible for appending a suitable message
* to conn->errorMessage. The caller must still inspect errno, but only
* to determine whether to continue/retry after error.
+ *
+ * Returns -1 in case of failure, except when the failure means there was a
+ * clean connection closure, in which case -2 is returned.
+ * Currently only the TLS implementation of pqsecure_read ever returns -2. For
+ * the other implementations a clean connection closure is detected in
+ * pqReadData instead.
*/
ssize_t
pqsecure_read(PGconn *conn, void *ptr, size_t len)
--
2.34.1
Attachment: v23-0001-Fix-spelling-of-canceled-cancellation.patch
From e867878c653a4f99e2ffe4c297ef80b8b5a6c041 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:31:18 +0100
Subject: [PATCH v23 1/4] Fix spelling of canceled/cancellation
This fixes places where words derived from cancel were not using their
common en-US spelling.
---
doc/src/sgml/event-trigger.sgml | 2 +-
doc/src/sgml/libpq.sgml | 2 +-
src/backend/storage/lmgr/proc.c | 2 +-
src/test/recovery/t/001_stream_rep.pl | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml
index 234b4ffd024..a76bd844257 100644
--- a/doc/src/sgml/event-trigger.sgml
+++ b/doc/src/sgml/event-trigger.sgml
@@ -50,7 +50,7 @@
writing anything to the database when running on a standby.
Also, it's recommended to avoid long-running queries in
<literal>login</literal> event triggers. Notes that, for instance,
- cancelling connection in <application>psql</application> wouldn't cancel
+ canceling connection in <application>psql</application> wouldn't cancel
the in-progress <literal>login</literal> trigger.
</para>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ed88ac001a1..1c6a6b3f4e2 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7555,7 +7555,7 @@ defaultNoticeProcessor(void *arg, const char *message)
is called. It is the ideal time to initialize any
<literal>instanceData</literal> an event procedure may need. Only one
register event will be fired per event handler per connection. If the
- event procedure fails (returns zero), the registration is cancelled.
+ event procedure fails (returns zero), the registration is canceled.
<synopsis>
typedef struct
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index b6451d9d083..0b87a3bf179 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -1353,7 +1353,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
* coding means that there is a tiny chance that the process
* terminates its current transaction and starts a different one
* before we have a change to send the signal; the worst possible
- * consequence is that a for-wraparound vacuum is cancelled. But
+ * consequence is that a for-wraparound vacuum is canceled. But
* that could happen in any case unless we were to do kill() with
* the lock held, which is much more undesirable.
*/
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 95f9b0d7726..a5e68c12b25 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -605,7 +605,7 @@ is( $node_primary->poll_query_until(
ok( pump_until(
$sigchld_bb, $sigchld_bb_timeout,
\$sigchld_bb_stderr, qr/backup is not in progress/),
- 'base backup cleanly cancelled');
+ 'base backup cleanly canceled');
$sigchld_bb->finish();
done_testing();
base-commit: 0e917508b89dd21c5bcd9183e77585f01055a20d
--
2.34.1
Attachment: v23-0004-Start-using-new-libpq-cancel-APIs.patch
From 8c8255ea949d8a2decf247b443f8adc19307ed06 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v23 4/4] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for canceling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 27bd0d31fdf..770908ed945 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1340,22 +1340,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 5800c6a9fb3..3cd55564cc9 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1369,36 +1369,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1739,7 +1807,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c988745b926..7db297b3a1c 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index cb405407028..9e8c2ae01c3 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7d45f5c6090..812a215c091 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v23-0003-Add-non-blocking-version-of-PQcancel.patch
From f536382012d7b3c3da9d67cc2adc240978b5b464 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:04 +0100
Subject: [PATCH v23 3/4] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 449 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++-
6 files changed, 985 insertions(+), 51 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1c6a6b3f4e2..17a99dec57f 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5279,7 +5279,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6003,13 +6003,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are used only during or after client
+ authentication are ignored, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed. A <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname>, in contrast, means that queries can
+ be sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, this connection is closed first. The object is then
+ prepared so that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6051,14 +6261,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6066,21 +6290,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6092,13 +6317,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9286,7 +9520,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 850734ac96c..972322aa9c0 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -191,3 +191,11 @@ PQclosePrepared 188
PQclosePortal 189
PQsendClosePrepared 190
PQsendClosePortal 191
+PQcancelSend 192
+PQcancelConn 193
+PQcancelPoll 194
+PQcancelStatus 195
+PQcancelSocket 196
+PQcancelErrorMessage 197
+PQcancelReset 198
+PQcancelFinish 199
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index bf83a9b5697..3c8cb152486 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -394,8 +394,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -623,8 +625,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would lose access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -755,6 +766,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -930,6 +1048,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2361,10 +2518,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try a single host
+ * and address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2506,7 +2671,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2631,13 +2799,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3279,6 +3451,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4028,8 +4223,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo is kept for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4397,19 +4598,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4528,6 +4718,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4535,6 +4750,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4588,7 +4812,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -5003,6 +5233,177 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBstart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * Windows is a bit special in its EOF behaviour for TCP. Sometimes it
+ * will error with an ECONNRESET when there is a clean connection closure.
+ * See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here. On
+ * all other OSes we treat any error other than a clean EOF as a failure
+ * and report it as such.
+ */
+ if (n < 0 && n != -2)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do
+ * unexpectedly receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 97762d56f5d..44185a68f45 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 7888199b0d9..02079f5f4e8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 3c009ee1539..e78af194d15 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1746,6 +2004,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1847,7 +2106,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On Thu, 14 Dec 2023 at 13:57, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I changed all the places that were not adhering to those spellings.
It seems I forgot a /g on my sed command, so I missed one
occurrence, which caused the test to fail to compile. Attached is a
fixed version.
I also updated the patchset to use the EOF detection provided by
0a5c46a7a488f2f4260a90843bb9de6c584c7f4e instead of introducing a new
way of EOF detection using a -2 return value.
Attachments:
v24-0003-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From f600b0f997c33fd9bb2524f8c074b93ebbc6e143 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v24 3/3] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 27bd0d31fdf..770908ed945 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1340,22 +1340,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 5800c6a9fb3..3cd55564cc9 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1369,36 +1369,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1739,7 +1807,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c988745b926..7db297b3a1c 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index cb405407028..9e8c2ae01c3 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 7d45f5c6090..812a215c091 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
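For reviewers trying out the new API: below is an illustrative sketch (not part of the patch) of how an event-loop-based application could drive the non-blocking functions. All PQcancel* names are the ones introduced by the patch; the select(2) calls merely stand in for whatever readiness mechanism the application's real event loop uses, and PQcancelPoll is assumed to follow the PQconnectPoll convention of returning PGRES_POLLING_OK once the request has been dispatched.

```c
/*
 * Illustrative sketch: sending a cancel request without blocking the
 * event loop, using the API proposed in this patch. In a real event
 * loop the socket would be registered with the loop instead of being
 * passed to select(2) here.
 */
#include <stdio.h>
#include <sys/select.h>

#include "libpq-fe.h"

static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;

	if (cancelConn == NULL)
		return 0;

	/* Poll until the cancel request has been fully dispatched */
	while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
	{
		fd_set		fds;
		int			sock;

		pollres = PQcancelPoll(cancelConn);
		sock = PQcancelSocket(cancelConn);
		if (sock < 0)
			break;

		FD_ZERO(&fds);
		FD_SET(sock, &fds);
		if (pollres == PGRES_POLLING_READING)
			select(sock + 1, &fds, NULL, NULL, NULL);
		else if (pollres == PGRES_POLLING_WRITING)
			select(sock + 1, NULL, &fds, NULL, NULL);
	}

	if (PQcancelStatus(cancelConn) != CONNECTION_OK)
		fprintf(stderr, "cancel request failed: %s",
				PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
	return pollres == PGRES_POLLING_OK;
}
```

Compare this with the blocking usage in the disconnectDatabase hunk above, which collapses the whole loop into a single PQcancelSend call.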
Attachment: v24-0001-Fix-spelling-of-canceled-cancellation.patch (application/octet-stream)
From 2e57bc255239b1465f0eb5b3f35d05b8e786b3ce Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:31:18 +0100
Subject: [PATCH v24 1/3] Fix spelling of canceled/cancellation
This fixes places where words derived from cancel were not using their
common en-US spelling.
---
doc/src/sgml/event-trigger.sgml | 2 +-
doc/src/sgml/libpq.sgml | 2 +-
src/backend/storage/lmgr/proc.c | 2 +-
src/test/recovery/t/001_stream_rep.pl | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml
index 234b4ffd024..a76bd844257 100644
--- a/doc/src/sgml/event-trigger.sgml
+++ b/doc/src/sgml/event-trigger.sgml
@@ -50,7 +50,7 @@
writing anything to the database when running on a standby.
Also, it's recommended to avoid long-running queries in
<literal>login</literal> event triggers. Notes that, for instance,
- cancelling connection in <application>psql</application> wouldn't cancel
+ canceling connection in <application>psql</application> wouldn't cancel
the in-progress <literal>login</literal> trigger.
</para>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ed88ac001a1..1c6a6b3f4e2 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7555,7 +7555,7 @@ defaultNoticeProcessor(void *arg, const char *message)
is called. It is the ideal time to initialize any
<literal>instanceData</literal> an event procedure may need. Only one
register event will be fired per event handler per connection. If the
- event procedure fails (returns zero), the registration is cancelled.
+ event procedure fails (returns zero), the registration is canceled.
<synopsis>
typedef struct
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index b6451d9d083..0b87a3bf179 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -1353,7 +1353,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
* coding means that there is a tiny chance that the process
* terminates its current transaction and starts a different one
* before we have a change to send the signal; the worst possible
- * consequence is that a for-wraparound vacuum is cancelled. But
+ * consequence is that a for-wraparound vacuum is canceled. But
* that could happen in any case unless we were to do kill() with
* the lock held, which is much more undesirable.
*/
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 95f9b0d7726..a5e68c12b25 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -605,7 +605,7 @@ is( $node_primary->poll_query_until(
ok( pump_until(
$sigchld_bb, $sigchld_bb_timeout,
\$sigchld_bb_stderr, qr/backup is not in progress/),
- 'base backup cleanly cancelled');
+ 'base backup cleanly canceled');
$sigchld_bb->finish();
done_testing();
base-commit: 00498b718564cee3530b76d860b328718aed672b
--
2.34.1
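Independent of libpq, the cancel request that eventually goes over the wire (see the CancelRequestPacket hunk in fe-connect.c in the next patch) is a fixed 16-byte message: a length word that includes itself, the magic request code PG_PROTOCOL(1234,5678), and the backend PID and secret key, all in network byte order. A standalone sketch that assembles those bytes, useful for eyeballing the pqPacketSend call in the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* PG_PROTOCOL(1234,5678): the magic code identifying a cancel request */
#define CANCEL_REQUEST_CODE ((1234 << 16) | 5678)

/* Portable host-to-network conversion for a 32-bit value */
static uint32_t
hton32(uint32_t x)
{
	unsigned char b[4] = {x >> 24, x >> 16, x >> 8, x};
	uint32_t	r;

	memcpy(&r, b, 4);
	return r;
}

/* Fill buf (16 bytes) with a complete CancelRequest message */
static void
build_cancel_request(unsigned char *buf, uint32_t be_pid, uint32_t be_key)
{
	uint32_t	len = hton32(16);	/* length word includes itself */
	uint32_t	code = hton32(CANCEL_REQUEST_CODE);
	uint32_t	pid = hton32(be_pid);
	uint32_t	key = hton32(be_key);

	memcpy(buf + 0, &len, 4);
	memcpy(buf + 4, &code, 4);
	memcpy(buf + 8, &pid, 4);
	memcpy(buf + 12, &key, 4);
}
```

This mirrors what the patch's cancelRequest path sends: pqPacketSend prepends the length word, and the three remaining fields come from the CancelRequestPacket struct filled in with pg_hton32.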
Attachment: v24-0002-Add-non-blocking-version-of-PQcancel.patch (application/octet-stream)
From 24e408c37c077f1291d550a01e6bccc85f25d3b9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:04 +0100
Subject: [PATCH v24 2/3] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 451 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++-
6 files changed, 987 insertions(+), 51 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1c6a6b3f4e2..17a99dec57f 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5279,7 +5279,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6003,13 +6003,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+      The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+      Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+      which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+      it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+      connection is now closed. For a <structname>PGconn</structname>, in
+      contrast, <symbol>CONNECTION_OK</symbol> means that queries can be sent
+      over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+      If the <symbol>PGcancelConn</symbol> is currently being used to send a
+      cancel request, then that connection is closed first. The
+      <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+      used to send a new cancel request. This allows creating one <symbol>PGcancelConn</symbol>
+      for a <symbol>PGconn</symbol> and reusing it multiple times throughout
+      the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6051,14 +6261,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal-handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6066,21 +6290,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6092,13 +6317,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists because of backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9286,7 +9520,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 850734ac96c..972322aa9c0 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -191,3 +191,11 @@ PQclosePrepared 188
PQclosePortal 189
PQsendClosePrepared 190
PQsendClosePortal 191
+PQcancelSend 192
+PQcancelConn 193
+PQcancelPoll 194
+PQcancelStatus 195
+PQcancelSocket 196
+PQcancelErrorMessage 197
+PQcancelReset 198
+PQcancelFinish 199
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index bf83a9b5697..4a3e3cba91f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -394,8 +394,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -623,8 +625,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would no longer have access
+ * to the secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -755,6 +766,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -930,6 +1048,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2361,10 +2518,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2506,7 +2671,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2631,13 +2799,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3279,6 +3451,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4028,8 +4223,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4397,19 +4598,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4528,6 +4718,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4535,6 +4750,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4588,7 +4812,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -5003,6 +5233,179 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we unexpectedly
+ * do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 97762d56f5d..44185a68f45 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 7888199b0d9..02079f5f4e8 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 3c009ee1539..390ce0f0b38 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1746,6 +2004,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1847,7 +2106,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
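For anyone wanting to try the new API outside of the libpq_pipeline test, here is a minimal sketch of what a caller driving a non-blocking cancel with select(2) could look like. It assumes the PQcancel* signatures added by the patch above; the helper name and the minimal error handling are illustrative only, and a real event-loop codebase would register the socket with its loop instead of blocking in select():

```
/* Sketch only: assumes the PQcancelConn/PQcancelPoll API from this patch. */
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return 0;
	}

	/* Drive the cancel connection until the request is fully dispatched. */
	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (pollres == PGRES_POLLING_OK)
			break;
		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return 0;
		}

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* Placeholder for the event loop's wait. */
		(void) select(sock + 1, &input_mask, &output_mask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return 1;
}
```

As with PQcancel, a return of 1 only means the cancel request was dispatched; the query on the original connection may still complete normally if the server was already done processing it.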
On Wed, 20 Dec 2023 at 19:17, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Thu, 14 Dec 2023 at 13:57, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I changed all the places that were not adhering to those spellings.
It seems I forgot a /g on my sed command to do this so it turned out I
missed one that caused the test to fail to compile... Attached is a
fixed version. I also updated the patchset to use the EOF detection provided by
0a5c46a7a488f2f4260a90843bb9de6c584c7f4e instead of introducing a new
way of EOF detection using a -2 return value.
CFBot shows that the patch does not apply anymore as in [1]:
patching file doc/src/sgml/libpq.sgml
...
patching file src/interfaces/libpq/exports.txt
Hunk #1 FAILED at 191.
1 out of 1 hunk FAILED -- saving rejects to file
src/interfaces/libpq/exports.txt.rej
patching file src/interfaces/libpq/fe-connect.c
Please post an updated version for the same.
[1]: http://cfbot.cputube.org/patch_46_3511.log
Regards,
Vignesh
On Fri, 26 Jan 2024 at 02:59, vignesh C <vignesh21@gmail.com> wrote:
Please post an updated version for the same.
Done.
Attachments:
v25-0002-Add-non-blocking-version-of-PQcancel.patchapplication/x-patch; name=v25-0002-Add-non-blocking-version-of-PQcancel.patchDownload
From 5a94d610a4fe138365e2e88c5cec72eba53ed036 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:04 +0100
Subject: [PATCH v25 2/3] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 ++++++++++-
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-connect.c | 451 +++++++++++++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 9 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++-
6 files changed, 987 insertions(+), 51 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc0..9808e678650 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed. While a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a cancel
+ request, then that connection is closed first. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server at the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..125bc80679a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 79e0b73d618..f8e3b5953f0 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -394,8 +394,10 @@ static PGPing internal_ping(PGconn *conn);
static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
+static bool copyPGconn(PGconn *srcConn, PGconn *dstConn);
static void freePGconn(PGconn *conn);
static void closePGconn(PGconn *conn);
+static void release_conn_hosts(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -623,8 +625,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections must preserve their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would lose access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -755,6 +766,113 @@ PQping(const char *conninfo)
return ret;
}
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = makeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!copyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!connectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
/*
* PQconnectStartParams
*
@@ -930,6 +1048,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+static bool
+copyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2361,10 +2518,18 @@ connectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2506,7 +2671,10 @@ connectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2631,13 +2799,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3279,6 +3451,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
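For reference, the CancelRequest message that the hunk above sends is tiny and fixed-size. This standalone sketch shows its wire layout, assuming the format documented in the frontend/backend protocol: 16 bytes total, with the length, the request code 80877102 (that is, `(1234 << 16) | 5678`), the backend PID, and the secret key, all in network byte order. The helper names are illustrative, not libpq's.

```c
#include <assert.h>
#include <stdint.h>

/* Store a 32-bit value in big-endian (network) byte order. */
static void
put_be32(unsigned char *p, uint32_t v)
{
    p[0] = (unsigned char) (v >> 24);
    p[1] = (unsigned char) (v >> 16);
    p[2] = (unsigned char) (v >> 8);
    p[3] = (unsigned char) v;
}

/* Fill a 16-byte buffer with a cancel request for (pid, key). */
static void
build_cancel_request(unsigned char buf[16], uint32_t pid, uint32_t key)
{
    put_be32(buf + 0, 16);          /* message length, self-inclusive */
    put_be32(buf + 4, 80877102);    /* cancel request code: (1234 << 16) | 5678 */
    put_be32(buf + 8, pid);         /* backend PID of the target connection */
    put_be32(buf + 12, key);        /* secret key issued at connection start */
}
```

Because the packet carries no user data beyond the PID/key pair, sending it is the entire cancellation "conversation"; the server replies only by closing the socket.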
+
/*
* Build the startup packet.
*/
@@ -4028,8 +4223,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4397,19 +4598,8 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ release_conn_addrinfo(conn);
+ release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4528,6 +4718,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+static void
+release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
@@ -4535,6 +4750,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ return;
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4588,7 +4812,13 @@ closePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo, we don't free it
+ * here. Otherwise we would have to rebuild it during PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
@@ -5003,6 +5233,179 @@ cancel_errReturn:
return false;
}
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 on success, 0 on failure.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return connectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using connectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!connectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do
+ * unexpectedly receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting: the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
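The EOF handling in PQcancelPoll hinges on the difference between a clean close, where `read()` returns 0, and a socket error, where it returns a negative value with `errno` set. This standalone POSIX sketch demonstrates that distinction with an illustrative helper name (`classify_read` is not a libpq function):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 on clean EOF, 0 if data arrived, -1 on a socket error. */
static int
classify_read(int fd)
{
    char    c;
    ssize_t n = read(fd, &c, 1);

    if (n == 0)
        return 1;   /* orderly shutdown: what a completed cancel looks like */
    if (n < 0)
        return -1;  /* genuine socket error, errno describes it */
    return 0;       /* unexpected data from the server */
}
```

On a socketpair, closing one end makes a read on the other end return 0 rather than an error, which is exactly the signal the cancel connection waits for.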
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ closePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
/*
* PQrequestCancel: old, not thread-safe function for requesting query cancel
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..857ba54d943 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f0143726bbc..ea99ff4efa5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
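The pg_cancel_conn wrapper above embeds PGconn as its first (and only) member, which is what makes casts like `(PGconn *) cancelConn` in the accessors safe: C guarantees that a pointer to a struct also points at its first member. A standalone sketch of the same trick, with illustrative names:

```c
#include <assert.h>

typedef struct
{
    int     status;
} inner_t;

/* Wrapper whose first member is the wrapped struct, like pg_cancel_conn. */
typedef struct
{
    inner_t inner;
} wrapper_t;

static int
get_status(const inner_t *c)
{
    return c->status;
}

/* Accessor in the style of PQcancelStatus: cast wrapper to inner type. */
static int
wrapper_status(const wrapper_t *w)
{
    return get_status((const inner_t *) w);
}
```

The distinct wrapper type buys compile-time safety: callers cannot accidentally pass a cancel connection to functions expecting a regular connection, even though the in-memory layout is identical.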
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..580003002e4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("monitoring query failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * Test that PQcancelReset works on the cancel connection and that it can
+ * be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v25-0003-Start-using-new-libpq-cancel-APIs.patchapplication/x-patch; name=v25-0003-Start-using-new-libpq-cancel-APIs.patchDownload
From 2ee4f68919d60bd94f7ecfc23a662a22efaf0e62 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v25 3/3] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..81749b2cdd0 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..3ac74ff6a7f 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index d83f6ae8cbc..df0b88e70c7 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 90c8fa4b705..115c3c117a1 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..c5cd2f57875 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v25-0001-Fix-spelling-of-canceled-cancellation.patch
From 2c43b57abe766a0b817ee85597a326f3ece5ed41 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:31:18 +0100
Subject: [PATCH v25 1/3] Fix spelling of canceled/cancellation
This fixes places where words derived from cancel were not using their
common en-US spelling.
---
doc/src/sgml/event-trigger.sgml | 2 +-
doc/src/sgml/libpq.sgml | 2 +-
src/backend/storage/lmgr/proc.c | 2 +-
src/test/recovery/t/001_stream_rep.pl | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/src/sgml/event-trigger.sgml b/doc/src/sgml/event-trigger.sgml
index 234b4ffd024..a76bd844257 100644
--- a/doc/src/sgml/event-trigger.sgml
+++ b/doc/src/sgml/event-trigger.sgml
@@ -50,7 +50,7 @@
writing anything to the database when running on a standby.
Also, it's recommended to avoid long-running queries in
<literal>login</literal> event triggers. Notes that, for instance,
- cancelling connection in <application>psql</application> wouldn't cancel
+ canceling connection in <application>psql</application> wouldn't cancel
the in-progress <literal>login</literal> trigger.
</para>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 173ab779a08..d0d5aefadc0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -7625,7 +7625,7 @@ defaultNoticeProcessor(void *arg, const char *message)
is called. It is the ideal time to initialize any
<literal>instanceData</literal> an event procedure may need. Only one
register event will be fired per event handler per connection. If the
- event procedure fails (returns zero), the registration is cancelled.
+ event procedure fails (returns zero), the registration is canceled.
<synopsis>
typedef struct
diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c
index 4ad96beb87a..e5977548fe2 100644
--- a/src/backend/storage/lmgr/proc.c
+++ b/src/backend/storage/lmgr/proc.c
@@ -1353,7 +1353,7 @@ ProcSleep(LOCALLOCK *locallock, LockMethod lockMethodTable)
* coding means that there is a tiny chance that the process
* terminates its current transaction and starts a different one
* before we have a change to send the signal; the worst possible
- * consequence is that a for-wraparound vacuum is cancelled. But
+ * consequence is that a for-wraparound vacuum is canceled. But
* that could happen in any case unless we were to do kill() with
* the lock held, which is much more undesirable.
*/
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index cb988f4d10c..5311ade509b 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -601,7 +601,7 @@ is( $node_primary->poll_query_until(
ok( pump_until(
$sigchld_bb, $sigchld_bb_timeout,
\$sigchld_bb_stderr, qr/backup is not in progress/),
- 'base backup cleanly cancelled');
+ 'base backup cleanly canceled');
$sigchld_bb->finish();
done_testing();
base-commit: b199eb89c67d737ced55721590f7fc8ff585e837
--
2.34.1
Pushed 0001.
I wonder, would it make sense to put all these new functions in a
separate file fe-cancel.c?
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"World domination is proceeding according to plan" (Andrew Morton)
On Fri, 26 Jan 2024 at 13:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
I wonder, would it make sense to put all these new functions in a
separate file fe-cancel.c?
Okay, I tried doing that. I think the end result is indeed quite nice,
having all the cancellation-related functions together in one file. But
it did require making a bunch of static functions in fe-connect.c
extern, and adding them to libpq-int.h. On one hand that seems fine to
me; on the other hand, maybe it indicates that this cancellation logic
belongs in the same file as the other connection functions (in a sense,
connecting is all that a cancel request does).
Attachments:
v26-0005-Start-using-new-libpq-cancel-APIs.patch
From 0a8de21ec8728a474d89d19880d44efae39038ef Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v26 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..81749b2cdd0 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..3ac74ff6a7f 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..c5cd2f57875 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v26-0002-libpq-Add-pq_release_conn_hosts-function.patch
From 63fb41b9aa654a6450a4c2f08d8bf204a5916b08 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:28 +0100
Subject: [PATCH v26 2/5] libpq: Add pq_release_conn_hosts function
In a follow-up commit we'll need to free this connhost field in a
function defined in fe-cancel.c, so this extracts the logic into a
dedicated extern function.
---
src/interfaces/libpq/fe-connect.c | 39 ++++++++++++++++++++-----------
src/interfaces/libpq/libpq-int.h | 1 +
2 files changed, 27 insertions(+), 13 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 5357b0a9d22..0622fe32253 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4395,19 +4395,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4526,6 +4514,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * pq_release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+void
+pq_release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 66b77e75e18..a0da7356584 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -680,6 +680,7 @@ extern int pqPacketSend(PGconn *conn, char pack_type,
extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
+extern void pq_release_conn_hosts(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
v26-0003-libpq-Change-some-static-functions-to-extern.patch
From c727e1ccab265c49f7737ba083dd0bf1aa55471e Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 16:47:51 +0100
Subject: [PATCH v26 3/5] libpq: Change some static functions to extern
This is in preparation of a follow up commit that starts using these
functions from fe-cancel.c.
---
src/interfaces/libpq/fe-connect.c | 85 +++++++++++++++----------------
src/interfaces/libpq/libpq-int.h | 6 +++
2 files changed, 46 insertions(+), 45 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 0622fe32253..8dbc9d2cc57 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -387,15 +387,10 @@ static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";
static bool connectOptions1(PGconn *conn, const char *conninfo);
-static bool connectOptions2(PGconn *conn);
-static int connectDBStart(PGconn *conn);
-static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
-static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
-static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -644,7 +639,7 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions connectDBStart, connectDBComplete
+ * Internally, the static functions pqConnectDBStart, pqConnectDBComplete
* are part of the connection procedure.
*/
@@ -678,7 +673,7 @@ PQconnectdbParams(const char *const *keywords,
PGconn *conn = PQconnectStartParams(keywords, values, expand_dbname);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -731,7 +726,7 @@ PQconnectdb(const char *conninfo)
PGconn *conn = PQconnectStart(conninfo);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -785,7 +780,7 @@ PQconnectStartParams(const char *const *keywords,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -819,15 +814,15 @@ PQconnectStartParams(const char *const *keywords,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -863,7 +858,7 @@ PQconnectStart(const char *conninfo)
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -876,15 +871,15 @@ PQconnectStart(const char *conninfo)
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -895,7 +890,7 @@ PQconnectStart(const char *conninfo)
* Move option values into conn structure
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message.
*/
@@ -933,7 +928,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
*
* Internal subroutine to set up connection parameters given an already-
* created PGconn and a conninfo string. Derived settings should be
- * processed by calling connectOptions2 next. (We split them because
+ * processed by calling pqConnectOptions2 next. (We split them because
* PQsetdbLogin overrides defaults in between.)
*
* Returns true if OK, false if trouble (in which case errorMessage is set
@@ -1055,15 +1050,15 @@ libpq_prng_init(PGconn *conn)
}
/*
- * connectOptions2
+ * pqConnectOptions2
*
* Compute derived connection options after absorbing all user-supplied info.
*
* Returns true if OK, false if trouble (in which case errorMessage is set
* and so is conn->status).
*/
-static bool
-connectOptions2(PGconn *conn)
+bool
+pqConnectOptions2(PGconn *conn)
{
int i;
@@ -1822,7 +1817,7 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -1901,14 +1896,14 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (connectDBStart(conn))
- (void) connectDBComplete(conn);
+ if (pqConnectDBStart(conn))
+ (void) pqConnectDBComplete(conn);
return conn;
@@ -2323,14 +2318,14 @@ setTCPUserTimeout(PGconn *conn)
}
/* ----------
- * connectDBStart -
+ * pqConnectDBStart -
* Begin the process of making a connection to the backend.
*
* Returns 1 if successful, 0 if not.
* ----------
*/
-static int
-connectDBStart(PGconn *conn)
+int
+pqConnectDBStart(PGconn *conn)
{
if (!conn)
return 0;
@@ -2393,14 +2388,14 @@ connect_errReturn:
/*
- * connectDBComplete
+ * pqConnectDBComplete
*
* Block and complete a connection.
*
* Returns 1 on success, 0 on failure.
*/
-static int
-connectDBComplete(PGconn *conn)
+int
+pqConnectDBComplete(PGconn *conn)
{
PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
time_t finish_time = ((time_t) -1);
@@ -2750,7 +2745,7 @@ keep_going: /* We will come back to here until there is
* combining it with the insertion.
*
* We don't need to initialize conn->prng_state here, because that
- * already happened in connectOptions2.
+ * already happened in pqConnectOptions2.
*/
for (int i = 1; i < conn->naddr; i++)
{
@@ -4227,7 +4222,7 @@ internal_ping(PGconn *conn)
/* Attempt to complete the connection */
if (conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
/* Definitely OK if we succeeded */
if (conn->status != CONNECTION_BAD)
@@ -4279,11 +4274,11 @@ internal_ping(PGconn *conn)
/*
- * makeEmptyPGconn
+ * pqMakeEmptyPGconn
* - create a PGconn data structure with (as yet) no interesting data
*/
-static PGconn *
-makeEmptyPGconn(void)
+PGconn *
+pqMakeEmptyPGconn(void)
{
PGconn *conn;
@@ -4376,7 +4371,7 @@ makeEmptyPGconn(void)
* freePGconn
* - free an idle (closed) PGconn data structure
*
- * NOTE: this should not overlap any functionality with closePGconn().
+ * NOTE: this should not overlap any functionality with pqClosePGconn().
* Clearing/resetting of transient state belongs there; what we do here is
* release data that is to be held for the life of the PGconn structure.
* If a value ought to be cleared/freed during PQreset(), do it there not here.
@@ -4563,15 +4558,15 @@ sendTerminateConn(PGconn *conn)
}
/*
- * closePGconn
+ * pqClosePGconn
* - properly close a connection to the backend
*
* This should reset or release all transient state, but NOT the connection
* parameters. On exit, the PGconn should be in condition to start a fresh
* connection with the same parameters (see PQreset()).
*/
-static void
-closePGconn(PGconn *conn)
+void
+pqClosePGconn(PGconn *conn)
{
/*
* If possible, send Terminate message to close the connection politely.
@@ -4614,7 +4609,7 @@ PQfinish(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
freePGconn(conn);
}
}
@@ -4628,9 +4623,9 @@ PQreset(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- if (connectDBStart(conn) && connectDBComplete(conn))
+ if (pqConnectDBStart(conn) && pqConnectDBComplete(conn))
{
/*
* Notify event procs of successful reset.
@@ -4661,9 +4656,9 @@ PQresetStart(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- return connectDBStart(conn);
+ return pqConnectDBStart(conn);
}
return 0;
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index a0da7356584..b1e1bd6331f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -681,6 +681,12 @@ extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
+extern bool pqConnectOptions2(PGconn *conn);
+extern int pqConnectDBStart(PGconn *conn);
+extern int pqConnectDBComplete(PGconn *conn);
+extern PGconn *pqMakeEmptyPGconn(void);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
+extern void pqClosePGconn(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
v26-0001-libpq-Move-cancellation-related-functions-to-fe-.patch
From b3a3e5e659b68f2a9abf1d9af8733ee4888c5c60 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 14:35:48 +0100
Subject: [PATCH v26 1/5] libpq: Move cancellation related functions to
fe-cancel.c
In follow-up commits we'll add more functions related to cancellations.
This groups them all together, instead of mixing them in with all the
other functions in fe-connect.c.
---
src/interfaces/libpq/fe-cancel.c | 386 ++++++++++++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 405 ++----------------------------
src/interfaces/libpq/libpq-int.h | 2 +
src/interfaces/libpq/meson.build | 1 +
4 files changed, 407 insertions(+), 387 deletions(-)
create mode 100644 src/interfaces/libpq/fe-cancel.c
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
new file mode 100644
index 00000000000..f1d836d0216
--- /dev/null
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -0,0 +1,386 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-cancel.c
+ * functions related to query cancellation
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ * src/interfaces/libpq/fe-cancel.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+#include "port/pg_bswap.h"
+
+/*
+ * PQgetCancel: get a PGcancel structure corresponding to a connection.
+ *
+ * A copy is needed to be able to cancel a running query from a different
+ * thread. If the same structure is used all structure members would have
+ * to be individually locked (if the entire structure was locked, it would
+ * be impossible to cancel a synchronous query because the structure would
+ * have to stay locked for the duration of the query).
+ */
+PGcancel *
+PQgetCancel(PGconn *conn)
+{
+ PGcancel *cancel;
+
+ if (!conn)
+ return NULL;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ return NULL;
+
+ cancel = malloc(sizeof(PGcancel));
+ if (cancel == NULL)
+ return NULL;
+
+ memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
+ cancel->be_pid = conn->be_pid;
+ cancel->be_key = conn->be_key;
+ /* We use -1 to indicate an unset connection option */
+ cancel->pgtcp_user_timeout = -1;
+ cancel->keepalives = -1;
+ cancel->keepalives_idle = -1;
+ cancel->keepalives_interval = -1;
+ cancel->keepalives_count = -1;
+ if (conn->pgtcp_user_timeout != NULL)
+ {
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout,
+ &cancel->pgtcp_user_timeout,
+ conn, "tcp_user_timeout"))
+ goto fail;
+ }
+ if (conn->keepalives != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives,
+ &cancel->keepalives,
+ conn, "keepalives"))
+ goto fail;
+ }
+ if (conn->keepalives_idle != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_idle,
+ &cancel->keepalives_idle,
+ conn, "keepalives_idle"))
+ goto fail;
+ }
+ if (conn->keepalives_interval != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_interval,
+ &cancel->keepalives_interval,
+ conn, "keepalives_interval"))
+ goto fail;
+ }
+ if (conn->keepalives_count != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_count,
+ &cancel->keepalives_count,
+ conn, "keepalives_count"))
+ goto fail;
+ }
+
+ return cancel;
+
+fail:
+ free(cancel);
+ return NULL;
+}
+
+/* PQfreeCancel: free a cancel structure */
+void
+PQfreeCancel(PGcancel *cancel)
+{
+ free(cancel);
+}
+
+
+/*
+ * Sets an integer socket option on a TCP socket, if the provided value is
+ * not negative. Returns false if setsockopt fails for some reason.
+ *
+ * CAUTION: This needs to be signal safe, since it's used by PQcancel.
+ */
+#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
+static bool
+optional_setsockopt(int fd, int protoid, int optid, int value)
+{
+ if (value < 0)
+ return true;
+ if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
+ return false;
+ return true;
+}
+#endif
+
+
+/*
+ * PQcancel: request query cancel
+ *
+ * The return value is true if the cancel request was successfully
+ * dispatched, false if not (in which case an error message is available).
+ * Note: successful dispatch is no guarantee that there will be any effect at
+ * the backend. The application must read the operation result as usual.
+ *
+ * On failure, an error message is stored in *errbuf, which must be of size
+ * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
+ * success return.
+ *
+ * CAUTION: we want this routine to be safely callable from a signal handler
+ * (for example, an application might want to call it in a SIGINT handler).
+ * This means we cannot use any C library routine that might be non-reentrant.
+ * malloc/free are often non-reentrant, and anything that might call them is
+ * just as dangerous. We avoid sprintf here for that reason. Building up
+ * error messages with strcpy/strcat is tedious but should be quite safe.
+ * We also save/restore errno in case the signal handler support doesn't.
+ */
+int
+PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
+{
+ int save_errno = SOCK_ERRNO;
+ pgsocket tmpsock = PGINVALID_SOCKET;
+ int maxlen;
+ struct
+ {
+ uint32 packetlen;
+ CancelRequestPacket cp;
+ } crp;
+
+ if (!cancel)
+ {
+ strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
+ /* strlcpy probably doesn't change errno, but be paranoid */
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+ }
+
+ /*
+ * We need to open a temporary connection to the postmaster. Do this with
+ * only kernel calls.
+ */
+ if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
+ {
+ strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Since this connection will only be used to send a single packet of
+ * data, we don't need NODELAY. We also don't set the socket to
+ * nonblocking mode, because the API definition of PQcancel requires the
+ * cancel to be sent in a blocking way.
+ *
+ * We do set socket options related to keepalives and other TCP timeouts.
+ * This ensures that this function does not block indefinitely when
+ * reasonable keepalive and timeout settings have been provided.
+ */
+ if (cancel->raddr.addr.ss_family != AF_UNIX &&
+ cancel->keepalives != 0)
+ {
+#ifndef WIN32
+ if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+#ifdef PG_TCP_KEEPALIVE_IDLE
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
+ cancel->keepalives_idle))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPINTVL
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPCNT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
+ cancel->keepalives_count))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#else /* WIN32 */
+
+#ifdef SIO_KEEPALIVE_VALS
+ if (!setKeepalivesWin32(tmpsock,
+ cancel->keepalives_idle,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif /* SIO_KEEPALIVE_VALS */
+#endif /* WIN32 */
+
+ /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
+#ifdef TCP_USER_TIMEOUT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
+ cancel->pgtcp_user_timeout))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+ }
+
+retry3:
+ if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
+ cancel->raddr.salen) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry3;
+ strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /* Create and send the cancel request packet. */
+
+ crp.packetlen = pg_hton32((uint32) sizeof(crp));
+ crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ crp.cp.backendPID = pg_hton32(cancel->be_pid);
+ crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
+
+retry4:
+ if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry4;
+ strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Wait for the postmaster to close the connection, which indicates that
+ * it's processed the request. Without this delay, we might issue another
+ * command only to find that our cancel zaps that command instead of the
+ * one we thought we were canceling. Note we don't actually expect this
+ * read to obtain any data, we are just waiting for EOF to be signaled.
+ */
+retry5:
+ if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry5;
+ /* we ignore other error conditions */
+ }
+
+ /* All done */
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return true;
+
+cancel_errReturn:
+
+ /*
+ * Make sure we don't overflow the error buffer. Leave space for the \n at
+ * the end, and for the terminating zero.
+ */
+ maxlen = errbufsize - strlen(errbuf) - 2;
+ if (maxlen >= 0)
+ {
+ /*
+ * We can't invoke strerror here, since it's not signal-safe. Settle
+ * for printing the decimal value of errno. Even that has to be done
+ * the hard way.
+ */
+ int val = SOCK_ERRNO;
+ char buf[32];
+ char *bufp;
+
+ bufp = buf + sizeof(buf) - 1;
+ *bufp = '\0';
+ do
+ {
+ *(--bufp) = (val % 10) + '0';
+ val /= 10;
+ } while (val > 0);
+ bufp -= 6;
+ memcpy(bufp, "error ", 6);
+ strncat(errbuf, bufp, maxlen);
+ strcat(errbuf, "\n");
+ }
+ if (tmpsock != PGINVALID_SOCKET)
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+}
+
+/*
+ * PQrequestCancel: old, not thread-safe function for requesting query cancel
+ *
+ * Returns true if able to send the cancel request, false if not.
+ *
+ * On failure, the error message is saved in conn->errorMessage; this means
+ * that this can't be used when there might be other active operations on
+ * the connection object.
+ *
+ * NOTE: error messages will be cut off at the current size of the
+ * error message buffer, since we dare not try to expand conn->errorMessage!
+ */
+int
+PQrequestCancel(PGconn *conn)
+{
+ int r;
+ PGcancel *cancel;
+
+ /* Check we have an open connection */
+ if (!conn)
+ return false;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ strlcpy(conn->errorMessage.data,
+ "PQrequestCancel() -- connection is not open\n",
+ conn->errorMessage.maxlen);
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+
+ return false;
+ }
+
+ cancel = PQgetCancel(conn);
+ if (cancel)
+ {
+ r = PQcancel(cancel, conn->errorMessage.data,
+ conn->errorMessage.maxlen);
+ PQfreeCancel(cancel);
+ }
+ else
+ {
+ strlcpy(conn->errorMessage.data, "out of memory",
+ conn->errorMessage.maxlen);
+ r = false;
+ }
+
+ if (!r)
+ {
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+ }
+
+ return r;
+}
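The error path of PQcancel above avoids strerror and snprintf because they are not async-signal-safe, and instead formats errno "the hard way". The technique can be isolated as a small standalone helper; the function name below is ours for illustration, not part of libpq, and the buffer-size handling is a sketch that assumes a non-negative errno value and a sufficiently large buffer, as the original code does.

```c
#include <string.h>

/* Format an errno value as "error <n>" using only async-signal-safe
 * operations (no strerror, no snprintf), mirroring PQcancel's error
 * path.  Digits are emitted right-to-left from the end of the buffer,
 * then the fixed prefix is prepended.  `buf` should be at least 32
 * bytes; `val` is assumed non-negative. */
static char *
signal_safe_errno(char *buf, size_t buflen, int val)
{
	char	   *bufp = buf + buflen - 1;

	*bufp = '\0';
	do
	{
		*(--bufp) = (val % 10) + '0';	/* emit least-significant digit */
		val /= 10;
	} while (val > 0);
	bufp -= 6;
	memcpy(bufp, "error ", 6);			/* prepend fixed prefix, no NUL */
	return bufp;
}
```

A caller in a signal handler would then strncat the returned pointer onto a preallocated error buffer, exactly as PQcancel does.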
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 79e0b73d618..5357b0a9d22 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -443,8 +443,6 @@ static void pgpassfileWarning(PGconn *conn);
static void default_threadlock(int acquire);
static bool sslVerifyProtocolVersion(const char *version);
static bool sslVerifyProtocolRange(const char *min, const char *max);
-static bool parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context);
/* global variable because fe-auth.c needs to access it */
@@ -2081,9 +2079,9 @@ useKeepalives(PGconn *conn)
* store it in *result, complaining if there is any trailing garbage or an
* overflow. This allows any number of leading and trailing whitespaces.
*/
-static bool
-parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context)
+bool
+pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context)
{
char *end;
long numval;
@@ -2134,8 +2132,8 @@ setKeepalivesIdle(PGconn *conn)
if (conn->keepalives_idle == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ if (!pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (idle < 0)
idle = 0;
@@ -2168,8 +2166,8 @@ setKeepalivesInterval(PGconn *conn)
if (conn->keepalives_interval == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ if (!pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
if (interval < 0)
interval = 0;
@@ -2203,8 +2201,8 @@ setKeepalivesCount(PGconn *conn)
if (conn->keepalives_count == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_count, &count, conn,
- "keepalives_count"))
+ if (!pq_parse_int_param(conn->keepalives_count, &count, conn,
+ "keepalives_count"))
return 0;
if (count < 0)
count = 0;
@@ -2269,12 +2267,12 @@ prepKeepalivesWin32(PGconn *conn)
int interval = -1;
if (conn->keepalives_idle &&
- !parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ !pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (conn->keepalives_interval &&
- !parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ !pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
if (!setKeepalivesWin32(conn->sock, idle, interval))
@@ -2300,8 +2298,8 @@ setTCPUserTimeout(PGconn *conn)
if (conn->pgtcp_user_timeout == NULL)
return 1;
- if (!parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
- "tcp_user_timeout"))
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
+ "tcp_user_timeout"))
return 0;
if (timeout < 0)
@@ -2418,8 +2416,8 @@ connectDBComplete(PGconn *conn)
*/
if (conn->connect_timeout != NULL)
{
- if (!parse_int_param(conn->connect_timeout, &timeout, conn,
- "connect_timeout"))
+ if (!pq_parse_int_param(conn->connect_timeout, &timeout, conn,
+ "connect_timeout"))
{
/* mark the connection as bad to report the parsing failure */
conn->status = CONNECTION_BAD;
@@ -2666,7 +2664,7 @@ keep_going: /* We will come back to here until there is
thisport = DEF_PGPORT;
else
{
- if (!parse_int_param(ch->port, &thisport, conn, "port"))
+ if (!pq_parse_int_param(ch->port, &thisport, conn, "port"))
goto error_return;
if (thisport < 1 || thisport > 65535)
@@ -4694,373 +4692,6 @@ PQresetPoll(PGconn *conn)
return PGRES_POLLING_FAILED;
}
-/*
- * PQgetCancel: get a PGcancel structure corresponding to a connection.
- *
- * A copy is needed to be able to cancel a running query from a different
- * thread. If the same structure is used all structure members would have
- * to be individually locked (if the entire structure was locked, it would
- * be impossible to cancel a synchronous query because the structure would
- * have to stay locked for the duration of the query).
- */
-PGcancel *
-PQgetCancel(PGconn *conn)
-{
- PGcancel *cancel;
-
- if (!conn)
- return NULL;
-
- if (conn->sock == PGINVALID_SOCKET)
- return NULL;
-
- cancel = malloc(sizeof(PGcancel));
- if (cancel == NULL)
- return NULL;
-
- memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
- cancel->be_pid = conn->be_pid;
- cancel->be_key = conn->be_key;
- /* We use -1 to indicate an unset connection option */
- cancel->pgtcp_user_timeout = -1;
- cancel->keepalives = -1;
- cancel->keepalives_idle = -1;
- cancel->keepalives_interval = -1;
- cancel->keepalives_count = -1;
- if (conn->pgtcp_user_timeout != NULL)
- {
- if (!parse_int_param(conn->pgtcp_user_timeout,
- &cancel->pgtcp_user_timeout,
- conn, "tcp_user_timeout"))
- goto fail;
- }
- if (conn->keepalives != NULL)
- {
- if (!parse_int_param(conn->keepalives,
- &cancel->keepalives,
- conn, "keepalives"))
- goto fail;
- }
- if (conn->keepalives_idle != NULL)
- {
- if (!parse_int_param(conn->keepalives_idle,
- &cancel->keepalives_idle,
- conn, "keepalives_idle"))
- goto fail;
- }
- if (conn->keepalives_interval != NULL)
- {
- if (!parse_int_param(conn->keepalives_interval,
- &cancel->keepalives_interval,
- conn, "keepalives_interval"))
- goto fail;
- }
- if (conn->keepalives_count != NULL)
- {
- if (!parse_int_param(conn->keepalives_count,
- &cancel->keepalives_count,
- conn, "keepalives_count"))
- goto fail;
- }
-
- return cancel;
-
-fail:
- free(cancel);
- return NULL;
-}
-
-/* PQfreeCancel: free a cancel structure */
-void
-PQfreeCancel(PGcancel *cancel)
-{
- free(cancel);
-}
-
-
-/*
- * Sets an integer socket option on a TCP socket, if the provided value is
- * not negative. Returns false if setsockopt fails for some reason.
- *
- * CAUTION: This needs to be signal safe, since it's used by PQcancel.
- */
-#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
-static bool
-optional_setsockopt(int fd, int protoid, int optid, int value)
-{
- if (value < 0)
- return true;
- if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
- return false;
- return true;
-}
-#endif
-
-
-/*
- * PQcancel: request query cancel
- *
- * The return value is true if the cancel request was successfully
- * dispatched, false if not (in which case an error message is available).
- * Note: successful dispatch is no guarantee that there will be any effect at
- * the backend. The application must read the operation result as usual.
- *
- * On failure, an error message is stored in *errbuf, which must be of size
- * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
- * success return.
- *
- * CAUTION: we want this routine to be safely callable from a signal handler
- * (for example, an application might want to call it in a SIGINT handler).
- * This means we cannot use any C library routine that might be non-reentrant.
- * malloc/free are often non-reentrant, and anything that might call them is
- * just as dangerous. We avoid sprintf here for that reason. Building up
- * error messages with strcpy/strcat is tedious but should be quite safe.
- * We also save/restore errno in case the signal handler support doesn't.
- */
-int
-PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
-{
- int save_errno = SOCK_ERRNO;
- pgsocket tmpsock = PGINVALID_SOCKET;
- int maxlen;
- struct
- {
- uint32 packetlen;
- CancelRequestPacket cp;
- } crp;
-
- if (!cancel)
- {
- strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
- /* strlcpy probably doesn't change errno, but be paranoid */
- SOCK_ERRNO_SET(save_errno);
- return false;
- }
-
- /*
- * We need to open a temporary connection to the postmaster. Do this with
- * only kernel calls.
- */
- if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
- {
- strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Since this connection will only be used to send a single packet of
- * data, we don't need NODELAY. We also don't set the socket to
- * nonblocking mode, because the API definition of PQcancel requires the
- * cancel to be sent in a blocking way.
- *
- * We do set socket options related to keepalives and other TCP timeouts.
- * This ensures that this function does not block indefinitely when
- * reasonable keepalive and timeout settings have been provided.
- */
- if (cancel->raddr.addr.ss_family != AF_UNIX &&
- cancel->keepalives != 0)
- {
-#ifndef WIN32
- if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
-#ifdef PG_TCP_KEEPALIVE_IDLE
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
- cancel->keepalives_idle))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPINTVL
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPCNT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
- cancel->keepalives_count))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#else /* WIN32 */
-
-#ifdef SIO_KEEPALIVE_VALS
- if (!setKeepalivesWin32(tmpsock,
- cancel->keepalives_idle,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif /* SIO_KEEPALIVE_VALS */
-#endif /* WIN32 */
-
- /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
-#ifdef TCP_USER_TIMEOUT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
- cancel->pgtcp_user_timeout))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
- }
-
-retry3:
- if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
- cancel->raddr.salen) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry3;
- strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /* Create and send the cancel request packet. */
-
- crp.packetlen = pg_hton32((uint32) sizeof(crp));
- crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
- crp.cp.backendPID = pg_hton32(cancel->be_pid);
- crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
-
-retry4:
- if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry4;
- strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Wait for the postmaster to close the connection, which indicates that
- * it's processed the request. Without this delay, we might issue another
- * command only to find that our cancel zaps that command instead of the
- * one we thought we were canceling. Note we don't actually expect this
- * read to obtain any data, we are just waiting for EOF to be signaled.
- */
-retry5:
- if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry5;
- /* we ignore other error conditions */
- }
-
- /* All done */
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return true;
-
-cancel_errReturn:
-
- /*
- * Make sure we don't overflow the error buffer. Leave space for the \n at
- * the end, and for the terminating zero.
- */
- maxlen = errbufsize - strlen(errbuf) - 2;
- if (maxlen >= 0)
- {
- /*
- * We can't invoke strerror here, since it's not signal-safe. Settle
- * for printing the decimal value of errno. Even that has to be done
- * the hard way.
- */
- int val = SOCK_ERRNO;
- char buf[32];
- char *bufp;
-
- bufp = buf + sizeof(buf) - 1;
- *bufp = '\0';
- do
- {
- *(--bufp) = (val % 10) + '0';
- val /= 10;
- } while (val > 0);
- bufp -= 6;
- memcpy(bufp, "error ", 6);
- strncat(errbuf, bufp, maxlen);
- strcat(errbuf, "\n");
- }
- if (tmpsock != PGINVALID_SOCKET)
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return false;
-}
-
-
-/*
- * PQrequestCancel: old, not thread-safe function for requesting query cancel
- *
- * Returns true if able to send the cancel request, false if not.
- *
- * On failure, the error message is saved in conn->errorMessage; this means
- * that this can't be used when there might be other active operations on
- * the connection object.
- *
- * NOTE: error messages will be cut off at the current size of the
- * error message buffer, since we dare not try to expand conn->errorMessage!
- */
-int
-PQrequestCancel(PGconn *conn)
-{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
-
- if (conn->sock == PGINVALID_SOCKET)
- {
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
- return false;
- }
-
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
- {
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
- }
-
- if (!r)
- {
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
- }
-
- return r;
-}
-
-
/*
* pqPacketSend() -- convenience routine to send a message to server.
*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f0143726bbc..66b77e75e18 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -678,6 +678,8 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context);
extern pgthreadlock_t pg_g_threadlock;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c76a1e40c83..a47b6f425dd 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -6,6 +6,7 @@
libpq_sources = files(
'fe-auth-scram.c',
'fe-auth.c',
+ 'fe-cancel.c',
'fe-connect.c',
'fe-exec.c',
'fe-lobj.c',
base-commit: f7cf9494bad3aef1b2ba1cd84376a1e71797ac50
--
2.34.1
v26-0004-Add-non-blocking-version-of-PQcancel.patch
From 67f311ca416361668a6865d2d893f8c6d0cb6cd0 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v26 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
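For event-loop integration, the intended usage of the new API looks roughly like the following pseudocode. The polling protocol is assumed to mirror PQconnectPoll, and wait_on_socket/report are placeholders for the application's own event loop and error reporting; PQcancelErrorMessage and PQcancelFinish are the companion functions added elsewhere in this patch.

```
/* Pseudocode sketch: non-blocking cancel using the API added here. */
PGcancelConn *cancelConn = PQcancelConn(conn);

if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
    report(PQcancelErrorMessage(cancelConn));
else
{
    PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;

    while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
    {
        /* Hand the socket to the event loop and wait for readability
         * or writability, as requested by the previous poll result. */
        wait_on_socket(PQcancelSocket(cancelConn), pollres);
        pollres = PQcancelPoll(cancelConn);
    }
    if (pollres == PGRES_POLLING_FAILED)
        report(PQcancelErrorMessage(cancelConn));
}
PQcancelFinish(cancelConn);     /* always dispose of the object */
```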
---
doc/src/sgml/libpq.sgml | 280 ++++++++++++++--
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-cancel.c | 304 +++++++++++++++++-
src/interfaces/libpq/fe-connect.c | 130 +++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 ++++++++++++++-
7 files changed, 974 insertions(+), 48 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc0..9808e678650 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, a <structname>PGcancelConn</structname> can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>,
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
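+ As a rough sketch (not part of this patch), the polling protocol mirrors
+ <function>PQconnectPoll</function>: an event loop, here simulated with
+ <function>select()</function>, waits on the socket in the direction the poll
+ function requests. Assumes an established <structname>PGconn</structname>
+ named <literal>conn</literal>; timeouts and <symbol>EINTR</symbol> handling
+ are abbreviated:

```c
/*
 * Sketch only (not part of this patch): drive a cancel request to
 * completion from an event loop, here simulated with select().
 * Assumes an established PGconn "conn"; timeouts and EINTR handling
 * are abbreviated.
 */
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

static int
cancel_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	PostgresPollingStatusType pollres;

	/* The first call to PQcancelPoll opens the socket */
	while ((pollres = PQcancelPoll(cancelConn)) != PGRES_POLLING_OK)
	{
		fd_set		input_mask;
		fd_set		output_mask;
		int			sock = PQcancelSocket(cancelConn);

		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "cancel failed: %s",
					PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return 0;
		}

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);
		(void) select(sock + 1, &input_mask, &output_mask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return 1;
}
```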
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not finished sending the cancel
+ request yet) and frees the memory used by the
+ <structname>PGcancelConn</structname> object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, that connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server at the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues of this
+ function. It has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..125bc80679a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index f1d836d0216..e9262359453 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -19,6 +19,290 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pq_release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using pqConnectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -55,36 +339,36 @@ PQgetCancel(PGconn *conn)
if (conn->pgtcp_user_timeout != NULL)
{
if (!pq_parse_int_param(conn->pgtcp_user_timeout,
- &cancel->pgtcp_user_timeout,
- conn, "tcp_user_timeout"))
+ &cancel->pgtcp_user_timeout,
+ conn, "tcp_user_timeout"))
goto fail;
}
if (conn->keepalives != NULL)
{
if (!pq_parse_int_param(conn->keepalives,
- &cancel->keepalives,
- conn, "keepalives"))
+ &cancel->keepalives,
+ conn, "keepalives"))
goto fail;
}
if (conn->keepalives_idle != NULL)
{
if (!pq_parse_int_param(conn->keepalives_idle,
- &cancel->keepalives_idle,
- conn, "keepalives_idle"))
+ &cancel->keepalives_idle,
+ conn, "keepalives_idle"))
goto fail;
}
if (conn->keepalives_interval != NULL)
{
if (!pq_parse_int_param(conn->keepalives_interval,
- &cancel->keepalives_interval,
- conn, "keepalives_interval"))
+ &cancel->keepalives_interval,
+ conn, "keepalives_interval"))
goto fail;
}
if (conn->keepalives_count != NULL)
{
if (!pq_parse_int_param(conn->keepalives_count,
- &cancel->keepalives_count,
- conn, "keepalives_count"))
+ &cancel->keepalives_count,
+ conn, "keepalives_count"))
goto fail;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8dbc9d2cc57..b63ac63d514 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2354,10 +2402,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2499,7 +2555,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2624,13 +2683,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3272,6 +3335,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4021,8 +4107,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4390,6 +4482,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
@@ -4541,6 +4634,15 @@ pq_release_conn_hosts(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4594,7 +4696,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..857ba54d943 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index b1e1bd6331f..94990292a04 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -678,6 +687,7 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..580003002e4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On 2024-Jan-26, Jelte Fennema-Nio wrote:
Okay I tried doing that. I think the end result is indeed quite nice,
having all the cancellation related functions together in a file. But
it did require making a bunch of static functions in fe-connect
extern, and adding them to libpq-int.h. On one hand that seems fine to
me, on the other maybe that indicates that this cancellation logic
makes sense to be in the same file as the other connection functions
(in a sense, connecting is all that a cancel request does).
Yeah, I see that point of view as well. I like the end result; the
additional protos in libpq-int.h don't bother me. Does anybody else
want to share their opinion on it? If none, then I'd consider going
ahead with this version.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"We’ve narrowed the problem down to the customer’s pants being in a situation
of vigorous combustion" (Robert Haas, Postgres expert extraordinaire)
On Fri, 26 Jan 2024 at 18:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Yeah, I see that point of view as well. I like the end result; the
additional protos in libpq-int.h don't bother me. Does anybody else
want to share their opinion on it? If none, then I'd consider going
ahead with this version.
To be clear, I'm +1 on the new file structure (although if people feel
strongly against it, I don't care enough to make a big deal out of
it).
@Alvaro did you have any other comments on the contents of the patch btw?
On Fri, 26 Jan 2024 at 22:22, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Fri, 26 Jan 2024 at 13:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
I wonder, would it make sense to put all these new functions in a
separate file fe-cancel.c?
Okay I tried doing that. I think the end result is indeed quite nice,
having all the cancellation related functions together in a file. But
it did require making a bunch of static functions in fe-connect
extern, and adding them to libpq-int.h. On one hand that seems fine to
me, on the other maybe that indicates that this cancellation logic
makes sense to be in the same file as the other connection functions
(in a sense, connecting is all that a cancel request does).
CFBot shows that the patch has a few compilation errors, as in [1]:
[17:07:07.621] /usr/bin/ld:
../../../src/fe_utils/libpgfeutils.a(cancel.o): in function
`handle_sigint':
[17:07:07.621] cancel.c:(.text+0x50): undefined reference to `PQcancel'
[17:07:07.621] /usr/bin/ld:
../../../src/fe_utils/libpgfeutils.a(cancel.o): in function
`SetCancelConn':
[17:07:07.621] cancel.c:(.text+0x10c): undefined reference to `PQfreeCancel'
[17:07:07.621] /usr/bin/ld: cancel.c:(.text+0x114): undefined
reference to `PQgetCancel'
[17:07:07.621] /usr/bin/ld:
../../../src/fe_utils/libpgfeutils.a(cancel.o): in function
`ResetCancelConn':
[17:07:07.621] cancel.c:(.text+0x148): undefined reference to `PQfreeCancel'
[17:07:07.621] /usr/bin/ld:
../../../src/fe_utils/libpgfeutils.a(connect_utils.o): in function
`disconnectDatabase':
[17:07:07.621] connect_utils.c:(.text+0x2fc): undefined reference to
`PQcancelConn'
[17:07:07.621] /usr/bin/ld: connect_utils.c:(.text+0x307): undefined
reference to `PQcancelSend'
[17:07:07.621] /usr/bin/ld: connect_utils.c:(.text+0x30f): undefined
reference to `PQcancelFinish'
[17:07:07.623] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:
undefined reference to `PQcancelPoll'
[17:07:07.626] collect2: error: ld returned 1 exit status
[17:07:07.626] make[3]: *** [Makefile:31: pg_amcheck] Error 1
[17:07:07.626] make[2]: *** [Makefile:45: all-pg_amcheck-recurse] Error 2
[17:07:07.626] make[2]: *** Waiting for unfinished jobs....
[17:07:08.126] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:
undefined reference to `PQcancelPoll'
[17:07:08.130] collect2: error: ld returned 1 exit status
[17:07:08.131] make[3]: *** [Makefile:42: initdb] Error 1
[17:07:08.131] make[2]: *** [Makefile:45: all-initdb-recurse] Error 2
[17:07:08.492] /usr/bin/ld: ../../../src/interfaces/libpq/libpq.so:
undefined reference to `PQcancelPoll'
[17:07:08.495] collect2: error: ld returned 1 exit status
[17:07:08.496] make[3]: *** [Makefile:50: pg_basebackup] Error 1
[17:07:08.496] make[2]: *** [Makefile:45: all-pg_basebackup-recurse] Error 2
[17:07:09.060] /usr/bin/ld: parallel.o: in function `sigTermHandler':
[17:07:09.060] parallel.c:(.text+0x1aa): undefined reference to `PQcancel'
Please post an updated version for the same.
[1]: https://cirrus-ci.com/task/6210637211107328
Regards,
Vignesh
On Sun, 28 Jan 2024 at 04:15, vignesh C <vignesh21@gmail.com> wrote:
CFBot shows that the patch has a few compilation errors as in [1]:
[17:07:07.621] /usr/bin/ld:
../../../src/fe_utils/libpgfeutils.a(cancel.o): in function
`handle_sigint':
[17:07:07.621] cancel.c:(.text+0x50): undefined reference to `PQcancel'
I forgot to update the ./configure-based builds with the new file; only
meson was working. Also, it seems I trimmed the header list of
fe-cancel.c a bit too much for OSX, so I added unistd.h back.
Both of those are fixed now.
Attachments:
v27-0002-libpq-Add-pq_release_conn_hosts-function.patchapplication/octet-stream; name=v27-0002-libpq-Add-pq_release_conn_hosts-function.patchDownload
From 134d698f01e430fa8f119a4804888e022b33a68e Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:28 +0100
Subject: [PATCH v27 2/5] libpq: Add pq_release_conn_hosts function
In a follow-up commit we'll need to free the connhost field in a
function defined in fe-cancel.c, so this extracts the logic into a
dedicated extern function.
---
src/interfaces/libpq/fe-connect.c | 39 ++++++++++++++++++++-----------
src/interfaces/libpq/libpq-int.h | 1 +
2 files changed, 27 insertions(+), 13 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 5357b0a9d22..0622fe32253 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4395,19 +4395,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4526,6 +4514,31 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * pq_release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+void
+pq_release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 66b77e75e18..a0da7356584 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -680,6 +680,7 @@ extern int pqPacketSend(PGconn *conn, char pack_type,
extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
+extern void pq_release_conn_hosts(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
v27-0005-Start-using-new-libpq-cancel-APIs.patchapplication/octet-stream; name=v27-0005-Start-using-new-libpq-cancel-APIs.patchDownload
From c9e1b1d66efddf9f15a22daf450757c6d6561775 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v27 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..81749b2cdd0 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..3ac74ff6a7f 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..c5cd2f57875 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v27-0003-libpq-Change-some-static-functions-to-extern.patchapplication/octet-stream; name=v27-0003-libpq-Change-some-static-functions-to-extern.patchDownload
From 5851e24c40ee68a6da3837f27a6d0cab18322b94 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 16:47:51 +0100
Subject: [PATCH v27 3/5] libpq: Change some static functions to extern
This is in preparation for a follow-up commit that starts using these
functions from fe-cancel.c.
---
src/interfaces/libpq/fe-connect.c | 85 +++++++++++++++----------------
src/interfaces/libpq/libpq-int.h | 6 +++
2 files changed, 46 insertions(+), 45 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 0622fe32253..8dbc9d2cc57 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -387,15 +387,10 @@ static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";
static bool connectOptions1(PGconn *conn, const char *conninfo);
-static bool connectOptions2(PGconn *conn);
-static int connectDBStart(PGconn *conn);
-static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
-static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
-static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -644,7 +639,7 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions connectDBStart, connectDBComplete
+ * Internally, the static functions pqConnectDBStart, pqConnectDBComplete
* are part of the connection procedure.
*/
@@ -678,7 +673,7 @@ PQconnectdbParams(const char *const *keywords,
PGconn *conn = PQconnectStartParams(keywords, values, expand_dbname);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -731,7 +726,7 @@ PQconnectdb(const char *conninfo)
PGconn *conn = PQconnectStart(conninfo);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -785,7 +780,7 @@ PQconnectStartParams(const char *const *keywords,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -819,15 +814,15 @@ PQconnectStartParams(const char *const *keywords,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -863,7 +858,7 @@ PQconnectStart(const char *conninfo)
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -876,15 +871,15 @@ PQconnectStart(const char *conninfo)
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -895,7 +890,7 @@ PQconnectStart(const char *conninfo)
* Move option values into conn structure
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message.
*/
@@ -933,7 +928,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
*
* Internal subroutine to set up connection parameters given an already-
* created PGconn and a conninfo string. Derived settings should be
- * processed by calling connectOptions2 next. (We split them because
+ * processed by calling pqConnectOptions2 next. (We split them because
* PQsetdbLogin overrides defaults in between.)
*
* Returns true if OK, false if trouble (in which case errorMessage is set
@@ -1055,15 +1050,15 @@ libpq_prng_init(PGconn *conn)
}
/*
- * connectOptions2
+ * pqConnectOptions2
*
* Compute derived connection options after absorbing all user-supplied info.
*
* Returns true if OK, false if trouble (in which case errorMessage is set
* and so is conn->status).
*/
-static bool
-connectOptions2(PGconn *conn)
+bool
+pqConnectOptions2(PGconn *conn)
{
int i;
@@ -1822,7 +1817,7 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -1901,14 +1896,14 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (connectDBStart(conn))
- (void) connectDBComplete(conn);
+ if (pqConnectDBStart(conn))
+ (void) pqConnectDBComplete(conn);
return conn;
@@ -2323,14 +2318,14 @@ setTCPUserTimeout(PGconn *conn)
}
/* ----------
- * connectDBStart -
+ * pqConnectDBStart -
* Begin the process of making a connection to the backend.
*
* Returns 1 if successful, 0 if not.
* ----------
*/
-static int
-connectDBStart(PGconn *conn)
+int
+pqConnectDBStart(PGconn *conn)
{
if (!conn)
return 0;
@@ -2393,14 +2388,14 @@ connect_errReturn:
/*
- * connectDBComplete
+ * pqConnectDBComplete
*
* Block and complete a connection.
*
* Returns 1 on success, 0 on failure.
*/
-static int
-connectDBComplete(PGconn *conn)
+int
+pqConnectDBComplete(PGconn *conn)
{
PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
time_t finish_time = ((time_t) -1);
@@ -2750,7 +2745,7 @@ keep_going: /* We will come back to here until there is
* combining it with the insertion.
*
* We don't need to initialize conn->prng_state here, because that
- * already happened in connectOptions2.
+ * already happened in pqConnectOptions2.
*/
for (int i = 1; i < conn->naddr; i++)
{
@@ -4227,7 +4222,7 @@ internal_ping(PGconn *conn)
/* Attempt to complete the connection */
if (conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
/* Definitely OK if we succeeded */
if (conn->status != CONNECTION_BAD)
@@ -4279,11 +4274,11 @@ internal_ping(PGconn *conn)
/*
- * makeEmptyPGconn
+ * pqMakeEmptyPGconn
* - create a PGconn data structure with (as yet) no interesting data
*/
-static PGconn *
-makeEmptyPGconn(void)
+PGconn *
+pqMakeEmptyPGconn(void)
{
PGconn *conn;
@@ -4376,7 +4371,7 @@ makeEmptyPGconn(void)
* freePGconn
* - free an idle (closed) PGconn data structure
*
- * NOTE: this should not overlap any functionality with closePGconn().
+ * NOTE: this should not overlap any functionality with pqClosePGconn().
* Clearing/resetting of transient state belongs there; what we do here is
* release data that is to be held for the life of the PGconn structure.
* If a value ought to be cleared/freed during PQreset(), do it there not here.
@@ -4563,15 +4558,15 @@ sendTerminateConn(PGconn *conn)
}
/*
- * closePGconn
+ * pqClosePGconn
* - properly close a connection to the backend
*
* This should reset or release all transient state, but NOT the connection
* parameters. On exit, the PGconn should be in condition to start a fresh
* connection with the same parameters (see PQreset()).
*/
-static void
-closePGconn(PGconn *conn)
+void
+pqClosePGconn(PGconn *conn)
{
/*
* If possible, send Terminate message to close the connection politely.
@@ -4614,7 +4609,7 @@ PQfinish(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
freePGconn(conn);
}
}
@@ -4628,9 +4623,9 @@ PQreset(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- if (connectDBStart(conn) && connectDBComplete(conn))
+ if (pqConnectDBStart(conn) && pqConnectDBComplete(conn))
{
/*
* Notify event procs of successful reset.
@@ -4661,9 +4656,9 @@ PQresetStart(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- return connectDBStart(conn);
+ return pqConnectDBStart(conn);
}
return 0;
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index a0da7356584..b1e1bd6331f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -681,6 +681,12 @@ extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
+extern bool pqConnectOptions2(PGconn *conn);
+extern int pqConnectDBStart(PGconn *conn);
+extern int pqConnectDBComplete(PGconn *conn);
+extern PGconn *pqMakeEmptyPGconn(void);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
+extern void pqClosePGconn(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
v27-0004-Add-non-blocking-version-of-PQcancel.patchapplication/octet-stream; name=v27-0004-Add-non-blocking-version-of-PQcancel.patchDownload
From badc854b51348b7b36a873bc67886b3d247d6506 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v27 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API uses blocking I/O, which makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. PQcancelConn can now be used instead
to send cancel requests in a non-blocking way.
This patch also includes a test for all of libpq's cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
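For anyone wanting to try the new API quickly, here is a minimal blocking usage sketch based on the functions this patch adds (the helper name `cancel_query_blocking` is made up; error handling is abbreviated and a patched libpq build is assumed):

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Cancel the query currently running on "conn", blocking until done. */
static void
cancel_query_blocking(PGconn *conn)
{
    PGcancelConn *cancelConn = PQcancelConn(conn);

    if (PQcancelStatus(cancelConn) == CONNECTION_BAD ||
        !PQcancelSend(cancelConn))
        fprintf(stderr, "cancel failed: %s\n",
                PQcancelErrorMessage(cancelConn));

    /* Always dispose of the PGcancelConn, even when the request failed. */
    PQcancelFinish(cancelConn);
}
```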
doc/src/sgml/libpq.sgml | 280 ++++++++++++++--
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-cancel.c | 304 +++++++++++++++++-
src/interfaces/libpq/fe-connect.c | 130 +++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 ++++++++++++++-
7 files changed, 974 insertions(+), 48 deletions(-)
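And a non-blocking sketch of the same operation, driving PQcancelPoll from a simplified select() loop in the style of the existing PQconnectPoll documentation (again a reviewer sketch, not part of the patch; in a real event loop the select() call would be the loop's own wait primitive):

```c
#include <sys/select.h>
#include <stdio.h>
#include <libpq-fe.h>

/* Drive a cancel request to completion; returns 1 on success, 0 on failure. */
static int
cancel_query_nonblocking(PGconn *conn)
{
    PGcancelConn *cancelConn = PQcancelConn(conn);
    PostgresPollingStatusType pollres = PGRES_POLLING_WRITING;
    int         ok;

    while (pollres != PGRES_POLLING_OK && pollres != PGRES_POLLING_FAILED)
    {
        fd_set      input_mask;
        fd_set      output_mask;
        int         sock;

        pollres = PQcancelPoll(cancelConn);
        sock = PQcancelSocket(cancelConn);
        if (sock < 0)
            break;

        FD_ZERO(&input_mask);
        FD_ZERO(&output_mask);
        if (pollres == PGRES_POLLING_READING)
            FD_SET(sock, &input_mask);
        else if (pollres == PGRES_POLLING_WRITING)
            FD_SET(sock, &output_mask);
        else
            continue;

        if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
            break;
    }

    ok = (PQcancelStatus(cancelConn) == CONNECTION_OK);
    if (!ok)
        fprintf(stderr, "cancel failed: %s\n",
                PQcancelErrorMessage(cancelConn));
    PQcancelFinish(cancelConn);
    return ok;
}
```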
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc0..9808e678650 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed. A <symbol>CONNECTION_OK</symbol> result
+ for a <structname>PGconn</structname>, in contrast, means that queries
+ can be sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ postgres on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..125bc80679a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 9626f17a9cd..fd2a4f5b209 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,290 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pq_release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using pqConnectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do receive
+ * some data after all, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -57,36 +341,36 @@ PQgetCancel(PGconn *conn)
if (conn->pgtcp_user_timeout != NULL)
{
if (!pq_parse_int_param(conn->pgtcp_user_timeout,
- &cancel->pgtcp_user_timeout,
- conn, "tcp_user_timeout"))
+ &cancel->pgtcp_user_timeout,
+ conn, "tcp_user_timeout"))
goto fail;
}
if (conn->keepalives != NULL)
{
if (!pq_parse_int_param(conn->keepalives,
- &cancel->keepalives,
- conn, "keepalives"))
+ &cancel->keepalives,
+ conn, "keepalives"))
goto fail;
}
if (conn->keepalives_idle != NULL)
{
if (!pq_parse_int_param(conn->keepalives_idle,
- &cancel->keepalives_idle,
- conn, "keepalives_idle"))
+ &cancel->keepalives_idle,
+ conn, "keepalives_idle"))
goto fail;
}
if (conn->keepalives_interval != NULL)
{
if (!pq_parse_int_param(conn->keepalives_interval,
- &cancel->keepalives_interval,
- conn, "keepalives_interval"))
+ &cancel->keepalives_interval,
+ conn, "keepalives_interval"))
goto fail;
}
if (conn->keepalives_count != NULL)
{
if (!pq_parse_int_param(conn->keepalives_count,
- &cancel->keepalives_count,
- conn, "keepalives_count"))
+ &cancel->keepalives_count,
+ conn, "keepalives_count"))
goto fail;
}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8dbc9d2cc57..b63ac63d514 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2354,10 +2402,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2499,7 +2555,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2624,13 +2683,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3272,6 +3335,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4021,8 +4107,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4390,6 +4482,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
@@ -4541,6 +4634,15 @@ pq_release_conn_hosts(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4594,7 +4696,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..857ba54d943 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index b1e1bd6331f..94990292a04 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -678,6 +687,7 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..580003002e4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when cancellation was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("monitoring query failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
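For reference, here is a minimal sketch of how an application could drive this non-blocking API from a select()-based event loop. It is distilled from the polling loop in the test above, using only the functions declared in the libpq-fe.h hunk; error handling is trimmed, and a real event loop would register the socket with its own poller instead of calling select() directly:

```c
/*
 * Sketch: cancel the query running on "conn" without blocking the caller
 * for longer than one select() round at a time. Untested pseudocode-level
 * illustration of the API added by this patch.
 */
static bool
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return false;
	}

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		rmask;
		fd_set		wmask;

		if (pollres == PGRES_POLLING_OK)
			break;				/* cancel request delivered */
		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return false;
		}

		/* Wait for the socket to be ready in the requested direction. */
		FD_ZERO(&rmask);
		FD_ZERO(&wmask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &rmask);
		else
			FD_SET(sock, &wmask);
		(void) select(sock + 1, &rmask, &wmask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return true;
}
```

As with regular PQconnectPoll() usage, the caller is responsible for waiting on the socket between PQcancelPoll() calls; the function itself never blocks.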
Attachment: v27-0001-libpq-Move-cancellation-related-functions-to-fe-.patch (application/octet-stream)
From 98d8e783ebf4afa824bec70fd3ed266e2dba6908 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 14:35:48 +0100
Subject: [PATCH v27 1/5] libpq: Move cancellation related functions to
fe-cancel.c
In follow-up commits we'll add more functions related to cancellations.
This groups them all together in one file, instead of mixing them in
with all the other functions in fe-connect.c.
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/fe-cancel.c | 388 ++++++++++++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 405 ++----------------------------
src/interfaces/libpq/libpq-int.h | 2 +
src/interfaces/libpq/meson.build | 1 +
5 files changed, 410 insertions(+), 387 deletions(-)
create mode 100644 src/interfaces/libpq/fe-cancel.c
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fce17bc72a0..bfcc7cdde99 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -30,6 +30,7 @@ endif
OBJS = \
$(WIN32RES) \
fe-auth-scram.o \
+ fe-cancel.o \
fe-connect.o \
fe-exec.o \
fe-lobj.o \
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
new file mode 100644
index 00000000000..9626f17a9cd
--- /dev/null
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -0,0 +1,388 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-cancel.c
+ * functions related to query cancellation
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ * src/interfaces/libpq/fe-cancel.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <unistd.h>
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+#include "port/pg_bswap.h"
+
+/*
+ * PQgetCancel: get a PGcancel structure corresponding to a connection.
+ *
+ * A copy is needed to be able to cancel a running query from a different
+ * thread. If the same structure is used all structure members would have
+ * to be individually locked (if the entire structure was locked, it would
+ * be impossible to cancel a synchronous query because the structure would
+ * have to stay locked for the duration of the query).
+ */
+PGcancel *
+PQgetCancel(PGconn *conn)
+{
+ PGcancel *cancel;
+
+ if (!conn)
+ return NULL;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ return NULL;
+
+ cancel = malloc(sizeof(PGcancel));
+ if (cancel == NULL)
+ return NULL;
+
+ memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
+ cancel->be_pid = conn->be_pid;
+ cancel->be_key = conn->be_key;
+ /* We use -1 to indicate an unset connection option */
+ cancel->pgtcp_user_timeout = -1;
+ cancel->keepalives = -1;
+ cancel->keepalives_idle = -1;
+ cancel->keepalives_interval = -1;
+ cancel->keepalives_count = -1;
+ if (conn->pgtcp_user_timeout != NULL)
+ {
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout,
+ &cancel->pgtcp_user_timeout,
+ conn, "tcp_user_timeout"))
+ goto fail;
+ }
+ if (conn->keepalives != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives,
+ &cancel->keepalives,
+ conn, "keepalives"))
+ goto fail;
+ }
+ if (conn->keepalives_idle != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_idle,
+ &cancel->keepalives_idle,
+ conn, "keepalives_idle"))
+ goto fail;
+ }
+ if (conn->keepalives_interval != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_interval,
+ &cancel->keepalives_interval,
+ conn, "keepalives_interval"))
+ goto fail;
+ }
+ if (conn->keepalives_count != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_count,
+ &cancel->keepalives_count,
+ conn, "keepalives_count"))
+ goto fail;
+ }
+
+ return cancel;
+
+fail:
+ free(cancel);
+ return NULL;
+}
+
+/* PQfreeCancel: free a cancel structure */
+void
+PQfreeCancel(PGcancel *cancel)
+{
+ free(cancel);
+}
+
+
+/*
+ * Sets an integer socket option on a TCP socket, if the provided value is
+ * not negative. Returns false if setsockopt fails for some reason.
+ *
+ * CAUTION: This needs to be signal safe, since it's used by PQcancel.
+ */
+#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
+static bool
+optional_setsockopt(int fd, int protoid, int optid, int value)
+{
+ if (value < 0)
+ return true;
+ if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
+ return false;
+ return true;
+}
+#endif
+
+
+
+/*
+ * PQcancel: request query cancel
+ *
+ * The return value is true if the cancel request was successfully
+ * dispatched, false if not (in which case an error message is available).
+ * Note: successful dispatch is no guarantee that there will be any effect at
+ * the backend. The application must read the operation result as usual.
+ *
+ * On failure, an error message is stored in *errbuf, which must be of size
+ * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
+ * success return.
+ *
+ * CAUTION: we want this routine to be safely callable from a signal handler
+ * (for example, an application might want to call it in a SIGINT handler).
+ * This means we cannot use any C library routine that might be non-reentrant.
+ * malloc/free are often non-reentrant, and anything that might call them is
+ * just as dangerous. We avoid sprintf here for that reason. Building up
+ * error messages with strcpy/strcat is tedious but should be quite safe.
+ * We also save/restore errno in case the signal handler support doesn't.
+ */
+int
+PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
+{
+ int save_errno = SOCK_ERRNO;
+ pgsocket tmpsock = PGINVALID_SOCKET;
+ int maxlen;
+ struct
+ {
+ uint32 packetlen;
+ CancelRequestPacket cp;
+ } crp;
+
+ if (!cancel)
+ {
+ strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
+ /* strlcpy probably doesn't change errno, but be paranoid */
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+ }
+
+ /*
+ * We need to open a temporary connection to the postmaster. Do this with
+ * only kernel calls.
+ */
+ if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
+ {
+ strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Since this connection will only be used to send a single packet of
+ * data, we don't need NODELAY. We also don't set the socket to
+ * nonblocking mode, because the API definition of PQcancel requires the
+ * cancel to be sent in a blocking way.
+ *
+ * We do set socket options related to keepalives and other TCP timeouts.
+ * This ensures that this function does not block indefinitely when
+ * reasonable keepalive and timeout settings have been provided.
+ */
+ if (cancel->raddr.addr.ss_family != AF_UNIX &&
+ cancel->keepalives != 0)
+ {
+#ifndef WIN32
+ if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+#ifdef PG_TCP_KEEPALIVE_IDLE
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
+ cancel->keepalives_idle))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPINTVL
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPCNT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
+ cancel->keepalives_count))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#else /* WIN32 */
+
+#ifdef SIO_KEEPALIVE_VALS
+ if (!setKeepalivesWin32(tmpsock,
+ cancel->keepalives_idle,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif /* SIO_KEEPALIVE_VALS */
+#endif /* WIN32 */
+
+ /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
+#ifdef TCP_USER_TIMEOUT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
+ cancel->pgtcp_user_timeout))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+ }
+
+retry3:
+ if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
+ cancel->raddr.salen) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry3;
+ strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /* Create and send the cancel request packet. */
+
+ crp.packetlen = pg_hton32((uint32) sizeof(crp));
+ crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ crp.cp.backendPID = pg_hton32(cancel->be_pid);
+ crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
+
+retry4:
+ if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry4;
+ strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Wait for the postmaster to close the connection, which indicates that
+ * it's processed the request. Without this delay, we might issue another
+ * command only to find that our cancel zaps that command instead of the
+ * one we thought we were canceling. Note we don't actually expect this
+ * read to obtain any data, we are just waiting for EOF to be signaled.
+ */
+retry5:
+ if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry5;
+ /* we ignore other error conditions */
+ }
+
+ /* All done */
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return true;
+
+cancel_errReturn:
+
+ /*
+ * Make sure we don't overflow the error buffer. Leave space for the \n at
+ * the end, and for the terminating zero.
+ */
+ maxlen = errbufsize - strlen(errbuf) - 2;
+ if (maxlen >= 0)
+ {
+ /*
+ * We can't invoke strerror here, since it's not signal-safe. Settle
+ * for printing the decimal value of errno. Even that has to be done
+ * the hard way.
+ */
+ int val = SOCK_ERRNO;
+ char buf[32];
+ char *bufp;
+
+ bufp = buf + sizeof(buf) - 1;
+ *bufp = '\0';
+ do
+ {
+ *(--bufp) = (val % 10) + '0';
+ val /= 10;
+ } while (val > 0);
+ bufp -= 6;
+ memcpy(bufp, "error ", 6);
+ strncat(errbuf, bufp, maxlen);
+ strcat(errbuf, "\n");
+ }
+ if (tmpsock != PGINVALID_SOCKET)
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+}
+
+/*
+ * PQrequestCancel: old, not thread-safe function for requesting query cancel
+ *
+ * Returns true if able to send the cancel request, false if not.
+ *
+ * On failure, the error message is saved in conn->errorMessage; this means
+ * that this can't be used when there might be other active operations on
+ * the connection object.
+ *
+ * NOTE: error messages will be cut off at the current size of the
+ * error message buffer, since we dare not try to expand conn->errorMessage!
+ */
+int
+PQrequestCancel(PGconn *conn)
+{
+ int r;
+ PGcancel *cancel;
+
+ /* Check we have an open connection */
+ if (!conn)
+ return false;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ strlcpy(conn->errorMessage.data,
+ "PQrequestCancel() -- connection is not open\n",
+ conn->errorMessage.maxlen);
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+
+ return false;
+ }
+
+ cancel = PQgetCancel(conn);
+ if (cancel)
+ {
+ r = PQcancel(cancel, conn->errorMessage.data,
+ conn->errorMessage.maxlen);
+ PQfreeCancel(cancel);
+ }
+ else
+ {
+ strlcpy(conn->errorMessage.data, "out of memory",
+ conn->errorMessage.maxlen);
+ r = false;
+ }
+
+ if (!r)
+ {
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+ }
+
+ return r;
+}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 79e0b73d618..5357b0a9d22 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -443,8 +443,6 @@ static void pgpassfileWarning(PGconn *conn);
static void default_threadlock(int acquire);
static bool sslVerifyProtocolVersion(const char *version);
static bool sslVerifyProtocolRange(const char *min, const char *max);
-static bool parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context);
/* global variable because fe-auth.c needs to access it */
@@ -2081,9 +2079,9 @@ useKeepalives(PGconn *conn)
* store it in *result, complaining if there is any trailing garbage or an
* overflow. This allows any number of leading and trailing whitespaces.
*/
-static bool
-parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context)
+bool
+pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context)
{
char *end;
long numval;
@@ -2134,8 +2132,8 @@ setKeepalivesIdle(PGconn *conn)
if (conn->keepalives_idle == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ if (!pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (idle < 0)
idle = 0;
@@ -2168,8 +2166,8 @@ setKeepalivesInterval(PGconn *conn)
if (conn->keepalives_interval == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ if (!pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
if (interval < 0)
interval = 0;
@@ -2203,8 +2201,8 @@ setKeepalivesCount(PGconn *conn)
if (conn->keepalives_count == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_count, &count, conn,
- "keepalives_count"))
+ if (!pq_parse_int_param(conn->keepalives_count, &count, conn,
+ "keepalives_count"))
return 0;
if (count < 0)
count = 0;
@@ -2269,12 +2267,12 @@ prepKeepalivesWin32(PGconn *conn)
int interval = -1;
if (conn->keepalives_idle &&
- !parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ !pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (conn->keepalives_interval &&
- !parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ !pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
if (!setKeepalivesWin32(conn->sock, idle, interval))
@@ -2300,8 +2298,8 @@ setTCPUserTimeout(PGconn *conn)
if (conn->pgtcp_user_timeout == NULL)
return 1;
- if (!parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
- "tcp_user_timeout"))
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
+ "tcp_user_timeout"))
return 0;
if (timeout < 0)
@@ -2418,8 +2416,8 @@ connectDBComplete(PGconn *conn)
*/
if (conn->connect_timeout != NULL)
{
- if (!parse_int_param(conn->connect_timeout, &timeout, conn,
- "connect_timeout"))
+ if (!pq_parse_int_param(conn->connect_timeout, &timeout, conn,
+ "connect_timeout"))
{
/* mark the connection as bad to report the parsing failure */
conn->status = CONNECTION_BAD;
@@ -2666,7 +2664,7 @@ keep_going: /* We will come back to here until there is
thisport = DEF_PGPORT;
else
{
- if (!parse_int_param(ch->port, &thisport, conn, "port"))
+ if (!pq_parse_int_param(ch->port, &thisport, conn, "port"))
goto error_return;
if (thisport < 1 || thisport > 65535)
@@ -4694,373 +4692,6 @@ PQresetPoll(PGconn *conn)
return PGRES_POLLING_FAILED;
}
-/*
- * PQgetCancel: get a PGcancel structure corresponding to a connection.
- *
- * A copy is needed to be able to cancel a running query from a different
- * thread. If the same structure is used all structure members would have
- * to be individually locked (if the entire structure was locked, it would
- * be impossible to cancel a synchronous query because the structure would
- * have to stay locked for the duration of the query).
- */
-PGcancel *
-PQgetCancel(PGconn *conn)
-{
- PGcancel *cancel;
-
- if (!conn)
- return NULL;
-
- if (conn->sock == PGINVALID_SOCKET)
- return NULL;
-
- cancel = malloc(sizeof(PGcancel));
- if (cancel == NULL)
- return NULL;
-
- memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
- cancel->be_pid = conn->be_pid;
- cancel->be_key = conn->be_key;
- /* We use -1 to indicate an unset connection option */
- cancel->pgtcp_user_timeout = -1;
- cancel->keepalives = -1;
- cancel->keepalives_idle = -1;
- cancel->keepalives_interval = -1;
- cancel->keepalives_count = -1;
- if (conn->pgtcp_user_timeout != NULL)
- {
- if (!parse_int_param(conn->pgtcp_user_timeout,
- &cancel->pgtcp_user_timeout,
- conn, "tcp_user_timeout"))
- goto fail;
- }
- if (conn->keepalives != NULL)
- {
- if (!parse_int_param(conn->keepalives,
- &cancel->keepalives,
- conn, "keepalives"))
- goto fail;
- }
- if (conn->keepalives_idle != NULL)
- {
- if (!parse_int_param(conn->keepalives_idle,
- &cancel->keepalives_idle,
- conn, "keepalives_idle"))
- goto fail;
- }
- if (conn->keepalives_interval != NULL)
- {
- if (!parse_int_param(conn->keepalives_interval,
- &cancel->keepalives_interval,
- conn, "keepalives_interval"))
- goto fail;
- }
- if (conn->keepalives_count != NULL)
- {
- if (!parse_int_param(conn->keepalives_count,
- &cancel->keepalives_count,
- conn, "keepalives_count"))
- goto fail;
- }
-
- return cancel;
-
-fail:
- free(cancel);
- return NULL;
-}
-
-/* PQfreeCancel: free a cancel structure */
-void
-PQfreeCancel(PGcancel *cancel)
-{
- free(cancel);
-}
-
-
-/*
- * Sets an integer socket option on a TCP socket, if the provided value is
- * not negative. Returns false if setsockopt fails for some reason.
- *
- * CAUTION: This needs to be signal safe, since it's used by PQcancel.
- */
-#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
-static bool
-optional_setsockopt(int fd, int protoid, int optid, int value)
-{
- if (value < 0)
- return true;
- if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
- return false;
- return true;
-}
-#endif
-
-
-/*
- * PQcancel: request query cancel
- *
- * The return value is true if the cancel request was successfully
- * dispatched, false if not (in which case an error message is available).
- * Note: successful dispatch is no guarantee that there will be any effect at
- * the backend. The application must read the operation result as usual.
- *
- * On failure, an error message is stored in *errbuf, which must be of size
- * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
- * success return.
- *
- * CAUTION: we want this routine to be safely callable from a signal handler
- * (for example, an application might want to call it in a SIGINT handler).
- * This means we cannot use any C library routine that might be non-reentrant.
- * malloc/free are often non-reentrant, and anything that might call them is
- * just as dangerous. We avoid sprintf here for that reason. Building up
- * error messages with strcpy/strcat is tedious but should be quite safe.
- * We also save/restore errno in case the signal handler support doesn't.
- */
-int
-PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
-{
- int save_errno = SOCK_ERRNO;
- pgsocket tmpsock = PGINVALID_SOCKET;
- int maxlen;
- struct
- {
- uint32 packetlen;
- CancelRequestPacket cp;
- } crp;
-
- if (!cancel)
- {
- strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
- /* strlcpy probably doesn't change errno, but be paranoid */
- SOCK_ERRNO_SET(save_errno);
- return false;
- }
-
- /*
- * We need to open a temporary connection to the postmaster. Do this with
- * only kernel calls.
- */
- if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
- {
- strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Since this connection will only be used to send a single packet of
- * data, we don't need NODELAY. We also don't set the socket to
- * nonblocking mode, because the API definition of PQcancel requires the
- * cancel to be sent in a blocking way.
- *
- * We do set socket options related to keepalives and other TCP timeouts.
- * This ensures that this function does not block indefinitely when
- * reasonable keepalive and timeout settings have been provided.
- */
- if (cancel->raddr.addr.ss_family != AF_UNIX &&
- cancel->keepalives != 0)
- {
-#ifndef WIN32
- if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
-#ifdef PG_TCP_KEEPALIVE_IDLE
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
- cancel->keepalives_idle))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPINTVL
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPCNT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
- cancel->keepalives_count))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#else /* WIN32 */
-
-#ifdef SIO_KEEPALIVE_VALS
- if (!setKeepalivesWin32(tmpsock,
- cancel->keepalives_idle,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif /* SIO_KEEPALIVE_VALS */
-#endif /* WIN32 */
-
- /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
-#ifdef TCP_USER_TIMEOUT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
- cancel->pgtcp_user_timeout))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
- }
-
-retry3:
- if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
- cancel->raddr.salen) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry3;
- strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /* Create and send the cancel request packet. */
-
- crp.packetlen = pg_hton32((uint32) sizeof(crp));
- crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
- crp.cp.backendPID = pg_hton32(cancel->be_pid);
- crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
-
-retry4:
- if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry4;
- strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Wait for the postmaster to close the connection, which indicates that
- * it's processed the request. Without this delay, we might issue another
- * command only to find that our cancel zaps that command instead of the
- * one we thought we were canceling. Note we don't actually expect this
- * read to obtain any data, we are just waiting for EOF to be signaled.
- */
-retry5:
- if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry5;
- /* we ignore other error conditions */
- }
-
- /* All done */
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return true;
-
-cancel_errReturn:
-
- /*
- * Make sure we don't overflow the error buffer. Leave space for the \n at
- * the end, and for the terminating zero.
- */
- maxlen = errbufsize - strlen(errbuf) - 2;
- if (maxlen >= 0)
- {
- /*
- * We can't invoke strerror here, since it's not signal-safe. Settle
- * for printing the decimal value of errno. Even that has to be done
- * the hard way.
- */
- int val = SOCK_ERRNO;
- char buf[32];
- char *bufp;
-
- bufp = buf + sizeof(buf) - 1;
- *bufp = '\0';
- do
- {
- *(--bufp) = (val % 10) + '0';
- val /= 10;
- } while (val > 0);
- bufp -= 6;
- memcpy(bufp, "error ", 6);
- strncat(errbuf, bufp, maxlen);
- strcat(errbuf, "\n");
- }
- if (tmpsock != PGINVALID_SOCKET)
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return false;
-}
-
-
-/*
- * PQrequestCancel: old, not thread-safe function for requesting query cancel
- *
- * Returns true if able to send the cancel request, false if not.
- *
- * On failure, the error message is saved in conn->errorMessage; this means
- * that this can't be used when there might be other active operations on
- * the connection object.
- *
- * NOTE: error messages will be cut off at the current size of the
- * error message buffer, since we dare not try to expand conn->errorMessage!
- */
-int
-PQrequestCancel(PGconn *conn)
-{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
-
- if (conn->sock == PGINVALID_SOCKET)
- {
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
- return false;
- }
-
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
- {
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
- }
-
- if (!r)
- {
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
- }
-
- return r;
-}
-
-
/*
* pqPacketSend() -- convenience routine to send a message to server.
*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f0143726bbc..66b77e75e18 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -678,6 +678,8 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context);
extern pgthreadlock_t pg_g_threadlock;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c76a1e40c83..a47b6f425dd 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -6,6 +6,7 @@
libpq_sources = files(
'fe-auth-scram.c',
'fe-auth.c',
+ 'fe-cancel.c',
'fe-connect.c',
'fe-exec.c',
'fe-lobj.c',
base-commit: a3a836fb5e51183eae624d43225279306c2285b8
--
2.34.1
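As an aside for readers of the patch above: the hand-rolled errno formatting in PQcancel's cancel_errReturn path avoids sprintf because it is not async-signal-safe. That digit loop can be sketched as a standalone helper; `format_errno` and its buffer handling are illustrative names here, not libpq API:

```c
#include <string.h>

/*
 * Format a non-negative errno value as "error <decimal>", using only
 * async-signal-safe operations (no sprintf, no strerror).  Mirrors the
 * digit loop in PQcancel's cancel_errReturn path.  Assumes outsize > 0.
 */
static void
format_errno(int val, char *out, size_t outsize)
{
	char	buf[32];
	char   *bufp = buf + sizeof(buf) - 1;
	size_t	len;

	*bufp = '\0';
	do
	{
		/* emit digits right-to-left */
		*(--bufp) = (val % 10) + '0';
		val /= 10;
	} while (val > 0);

	/* prepend the "error " prefix */
	bufp -= 6;
	memcpy(bufp, "error ", 6);

	/* copy into the caller's buffer, truncating if necessary */
	len = strlen(bufp) + 1;
	if (len > outsize)
		len = outsize;
	memcpy(out, bufp, len);
	out[outsize - 1] = '\0';
}
```

The original writes digits into the tail of a scratch buffer and works backwards, which sidesteps the need to know the number's width in advance.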
On Sun, 28 Jan 2024 at 10:51, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
Both of those are fixed now.
Okay, there turned out to also be an issue on Windows with
setKeepalivesWin32 not being available in fe-cancel.c. That's fixed
now too (as well as some minor formatting issues).
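The keepalive handling being discussed reduces to the patch's optional_setsockopt helper: skip options the user left unset (signalled by -1) and report only genuine setsockopt failures. A self-contained sketch of that contract, assuming POSIX sockets (the Windows SIO_KEEPALIVE_VALS path is omitted):

```c
#include <stdbool.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

/*
 * Set an integer socket option if the value is non-negative; -1 means
 * "option not configured by the user" and is silently skipped.  This is
 * the same contract as optional_setsockopt in the patch, which must
 * remain signal safe because PQcancel calls it.
 */
static bool
optional_setsockopt(int fd, int protoid, int optid, int value)
{
	if (value < 0)
		return true;			/* unset option: nothing to do */
	if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
		return false;
	return true;
}
```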
Attachments:
v28-0002-libpq-Add-pq_release_conn_hosts-function.patch
From 4efbb0c75341f4612f0c5b8d5d3fe3f8f9c3b43c Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:28 +0100
Subject: [PATCH v28 2/5] libpq: Add pq_release_conn_hosts function
In a follow-up patch we'll need to free this connhost field from a function
defined in fe-cancel.c, so this extracts the logic into a dedicated extern
function.
---
src/interfaces/libpq/fe-connect.c | 38 ++++++++++++++++++++-----------
src/interfaces/libpq/libpq-int.h | 1 +
2 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 5d08b4904d3..bc1f6521650 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4395,19 +4395,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4526,6 +4514,30 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * pq_release_conn_hosts
+ * - Free the host list in the PGconn.
+ */
+void
+pq_release_conn_hosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 48c10b474f5..4cbad2c2c83 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -680,6 +680,7 @@ extern int pqPacketSend(PGconn *conn, char pack_type,
extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
+extern void pq_release_conn_hosts(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
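The password handling that pq_release_conn_hosts extracts above zeroes the secret before freeing it. A minimal standalone sketch of that pattern; `wipe_string` is a portable volatile-pointer stand-in for explicit_bzero, and `free_sensitive` is an illustrative name, not libpq API:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Wipe a string's bytes through a volatile pointer so the compiler
 * cannot optimize the stores away (a portable stand-in for
 * explicit_bzero).
 */
static void
wipe_string(char *s)
{
	volatile char *p = s;
	size_t		len = strlen(s);

	while (len-- > 0)
		*p++ = '\0';
}

/*
 * Free a possibly-sensitive string, zeroing it first, mirroring the
 * password handling in pq_release_conn_hosts.
 */
static void
free_sensitive(char *s)
{
	if (s != NULL)
	{
		wipe_string(s);
		free(s);
	}
}
```

Zeroing before free matters because freed heap memory can otherwise be handed to a later allocation with the password bytes still in place.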
v28-0003-libpq-Change-some-static-functions-to-extern.patch
From f1168ac4c3dd758a77be3ceb8c40bacb9aebef8c Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 16:47:51 +0100
Subject: [PATCH v28 3/5] libpq: Change some static functions to extern
This is in preparation for a follow-up commit that starts using these
functions from fe-cancel.c.
---
src/interfaces/libpq/fe-connect.c | 85 +++++++++++++++----------------
src/interfaces/libpq/libpq-int.h | 6 +++
2 files changed, 46 insertions(+), 45 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index bc1f6521650..aeb3adc0e31 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -387,15 +387,10 @@ static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";
static bool connectOptions1(PGconn *conn, const char *conninfo);
-static bool connectOptions2(PGconn *conn);
-static int connectDBStart(PGconn *conn);
-static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
-static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
-static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -644,7 +639,7 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions connectDBStart, connectDBComplete
+ * Internally, the static functions pqConnectDBStart, pqConnectDBComplete
* are part of the connection procedure.
*/
@@ -678,7 +673,7 @@ PQconnectdbParams(const char *const *keywords,
PGconn *conn = PQconnectStartParams(keywords, values, expand_dbname);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -731,7 +726,7 @@ PQconnectdb(const char *conninfo)
PGconn *conn = PQconnectStart(conninfo);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -785,7 +780,7 @@ PQconnectStartParams(const char *const *keywords,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -819,15 +814,15 @@ PQconnectStartParams(const char *const *keywords,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -863,7 +858,7 @@ PQconnectStart(const char *conninfo)
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -876,15 +871,15 @@ PQconnectStart(const char *conninfo)
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -895,7 +890,7 @@ PQconnectStart(const char *conninfo)
* Move option values into conn structure
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message.
*/
@@ -933,7 +928,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
*
* Internal subroutine to set up connection parameters given an already-
* created PGconn and a conninfo string. Derived settings should be
- * processed by calling connectOptions2 next. (We split them because
+ * processed by calling pqConnectOptions2 next. (We split them because
* PQsetdbLogin overrides defaults in between.)
*
* Returns true if OK, false if trouble (in which case errorMessage is set
@@ -1055,15 +1050,15 @@ libpq_prng_init(PGconn *conn)
}
/*
- * connectOptions2
+ * pqConnectOptions2
*
* Compute derived connection options after absorbing all user-supplied info.
*
* Returns true if OK, false if trouble (in which case errorMessage is set
* and so is conn->status).
*/
-static bool
-connectOptions2(PGconn *conn)
+bool
+pqConnectOptions2(PGconn *conn)
{
int i;
@@ -1822,7 +1817,7 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -1901,14 +1896,14 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (connectDBStart(conn))
- (void) connectDBComplete(conn);
+ if (pqConnectDBStart(conn))
+ (void) pqConnectDBComplete(conn);
return conn;
@@ -2323,14 +2318,14 @@ setTCPUserTimeout(PGconn *conn)
}
/* ----------
- * connectDBStart -
+ * pqConnectDBStart -
* Begin the process of making a connection to the backend.
*
* Returns 1 if successful, 0 if not.
* ----------
*/
-static int
-connectDBStart(PGconn *conn)
+int
+pqConnectDBStart(PGconn *conn)
{
if (!conn)
return 0;
@@ -2393,14 +2388,14 @@ connect_errReturn:
/*
- * connectDBComplete
+ * pqConnectDBComplete
*
* Block and complete a connection.
*
* Returns 1 on success, 0 on failure.
*/
-static int
-connectDBComplete(PGconn *conn)
+int
+pqConnectDBComplete(PGconn *conn)
{
PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
time_t finish_time = ((time_t) -1);
@@ -2750,7 +2745,7 @@ keep_going: /* We will come back to here until there is
* combining it with the insertion.
*
* We don't need to initialize conn->prng_state here, because that
- * already happened in connectOptions2.
+ * already happened in pqConnectOptions2.
*/
for (int i = 1; i < conn->naddr; i++)
{
@@ -4227,7 +4222,7 @@ internal_ping(PGconn *conn)
/* Attempt to complete the connection */
if (conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
/* Definitely OK if we succeeded */
if (conn->status != CONNECTION_BAD)
@@ -4279,11 +4274,11 @@ internal_ping(PGconn *conn)
/*
- * makeEmptyPGconn
+ * pqMakeEmptyPGconn
* - create a PGconn data structure with (as yet) no interesting data
*/
-static PGconn *
-makeEmptyPGconn(void)
+PGconn *
+pqMakeEmptyPGconn(void)
{
PGconn *conn;
@@ -4376,7 +4371,7 @@ makeEmptyPGconn(void)
* freePGconn
* - free an idle (closed) PGconn data structure
*
- * NOTE: this should not overlap any functionality with closePGconn().
+ * NOTE: this should not overlap any functionality with pqClosePGconn().
* Clearing/resetting of transient state belongs there; what we do here is
* release data that is to be held for the life of the PGconn structure.
* If a value ought to be cleared/freed during PQreset(), do it there not here.
@@ -4562,15 +4557,15 @@ sendTerminateConn(PGconn *conn)
}
/*
- * closePGconn
+ * pqClosePGconn
* - properly close a connection to the backend
*
* This should reset or release all transient state, but NOT the connection
* parameters. On exit, the PGconn should be in condition to start a fresh
* connection with the same parameters (see PQreset()).
*/
-static void
-closePGconn(PGconn *conn)
+void
+pqClosePGconn(PGconn *conn)
{
/*
* If possible, send Terminate message to close the connection politely.
@@ -4613,7 +4608,7 @@ PQfinish(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
freePGconn(conn);
}
}
@@ -4627,9 +4622,9 @@ PQreset(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- if (connectDBStart(conn) && connectDBComplete(conn))
+ if (pqConnectDBStart(conn) && pqConnectDBComplete(conn))
{
/*
* Notify event procs of successful reset.
@@ -4660,9 +4655,9 @@ PQresetStart(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- return connectDBStart(conn);
+ return pqConnectDBStart(conn);
}
return 0;
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4cbad2c2c83..c1ff12dd396 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -681,6 +681,12 @@ extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
+extern bool pqConnectOptions2(PGconn *conn);
+extern int pqConnectDBStart(PGconn *conn);
+extern int pqConnectDBComplete(PGconn *conn);
+extern PGconn *pqMakeEmptyPGconn(void);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
+extern void pqClosePGconn(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
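The pq_parse_int_param helper that these patches export parses an integer setting while complaining about trailing garbage or overflow, allowing any amount of surrounding whitespace. A standalone sketch of that contract; `parse_int_strict` is an illustrative name, and the error reporting through PGconn is omitted:

```c
#include <ctype.h>
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a base-10 integer, allowing leading/trailing whitespace but
 * rejecting trailing garbage and values outside int range.  Modeled on
 * libpq's pq_parse_int_param.
 */
static bool
parse_int_strict(const char *value, int *result)
{
	char	   *end;
	long		numval;

	*result = 0;
	errno = 0;
	numval = strtol(value, &end, 10);	/* strtol skips leading spaces */
	if (errno != 0 || end == value)
		return false;			/* overflow, or no digits at all */
	while (*end != '\0' && isspace((unsigned char) *end))
		end++;					/* tolerate trailing whitespace */
	if (*end != '\0')
		return false;			/* trailing garbage */
	if (numval < INT_MIN || numval > INT_MAX)
		return false;			/* out of int range on LP64 */
	*result = (int) numval;
	return true;
}
```

Checking `end == value` distinguishes "no digits" from a parsed zero, and the explicit errno reset before strtol is required because strtol only sets errno on failure.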
v28-0001-libpq-Move-cancellation-related-functions-to-fe-.patch
From d5cdd1451ecd9160d285bdfe3cdcf6df452c5249 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 14:35:48 +0100
Subject: [PATCH v28 1/5] libpq: Move cancellation related functions to
fe-cancel.c
In follow-up commits we'll add more functions related to cancellation.
This groups them all together in one file, instead of mixing them in with
all the other functions in fe-connect.c.
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/fe-cancel.c | 387 ++++++++++++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 411 ++----------------------------
src/interfaces/libpq/libpq-int.h | 6 +
src/interfaces/libpq/meson.build | 1 +
5 files changed, 416 insertions(+), 390 deletions(-)
create mode 100644 src/interfaces/libpq/fe-cancel.c
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fce17bc72a0..bfcc7cdde99 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -30,6 +30,7 @@ endif
OBJS = \
$(WIN32RES) \
fe-auth-scram.o \
+ fe-cancel.o \
fe-connect.o \
fe-exec.o \
fe-lobj.o \
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
new file mode 100644
index 00000000000..ce28d39f3f5
--- /dev/null
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -0,0 +1,387 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-cancel.c
+ * functions related to query cancellation
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ * src/interfaces/libpq/fe-cancel.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <unistd.h>
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+#include "port/pg_bswap.h"
+
+/*
+ * PQgetCancel: get a PGcancel structure corresponding to a connection.
+ *
+ * A copy is needed to be able to cancel a running query from a different
+ * thread. If the same structure is used all structure members would have
+ * to be individually locked (if the entire structure was locked, it would
+ * be impossible to cancel a synchronous query because the structure would
+ * have to stay locked for the duration of the query).
+ */
+PGcancel *
+PQgetCancel(PGconn *conn)
+{
+ PGcancel *cancel;
+
+ if (!conn)
+ return NULL;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ return NULL;
+
+ cancel = malloc(sizeof(PGcancel));
+ if (cancel == NULL)
+ return NULL;
+
+ memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
+ cancel->be_pid = conn->be_pid;
+ cancel->be_key = conn->be_key;
+ /* We use -1 to indicate an unset connection option */
+ cancel->pgtcp_user_timeout = -1;
+ cancel->keepalives = -1;
+ cancel->keepalives_idle = -1;
+ cancel->keepalives_interval = -1;
+ cancel->keepalives_count = -1;
+ if (conn->pgtcp_user_timeout != NULL)
+ {
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout,
+ &cancel->pgtcp_user_timeout,
+ conn, "tcp_user_timeout"))
+ goto fail;
+ }
+ if (conn->keepalives != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives,
+ &cancel->keepalives,
+ conn, "keepalives"))
+ goto fail;
+ }
+ if (conn->keepalives_idle != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_idle,
+ &cancel->keepalives_idle,
+ conn, "keepalives_idle"))
+ goto fail;
+ }
+ if (conn->keepalives_interval != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_interval,
+ &cancel->keepalives_interval,
+ conn, "keepalives_interval"))
+ goto fail;
+ }
+ if (conn->keepalives_count != NULL)
+ {
+ if (!pq_parse_int_param(conn->keepalives_count,
+ &cancel->keepalives_count,
+ conn, "keepalives_count"))
+ goto fail;
+ }
+
+ return cancel;
+
+fail:
+ free(cancel);
+ return NULL;
+}
+
+/* PQfreeCancel: free a cancel structure */
+void
+PQfreeCancel(PGcancel *cancel)
+{
+ free(cancel);
+}
+
+
+/*
+ * Sets an integer socket option on a TCP socket, if the provided value is
+ * not negative. Returns false if setsockopt fails for some reason.
+ *
+ * CAUTION: This needs to be signal safe, since it's used by PQcancel.
+ */
+#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
+static bool
+optional_setsockopt(int fd, int protoid, int optid, int value)
+{
+ if (value < 0)
+ return true;
+ if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
+ return false;
+ return true;
+}
+#endif
+
+
+/*
+ * PQcancel: request query cancel
+ *
+ * The return value is true if the cancel request was successfully
+ * dispatched, false if not (in which case an error message is available).
+ * Note: successful dispatch is no guarantee that there will be any effect at
+ * the backend. The application must read the operation result as usual.
+ *
+ * On failure, an error message is stored in *errbuf, which must be of size
+ * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
+ * success return.
+ *
+ * CAUTION: we want this routine to be safely callable from a signal handler
+ * (for example, an application might want to call it in a SIGINT handler).
+ * This means we cannot use any C library routine that might be non-reentrant.
+ * malloc/free are often non-reentrant, and anything that might call them is
+ * just as dangerous. We avoid sprintf here for that reason. Building up
+ * error messages with strcpy/strcat is tedious but should be quite safe.
+ * We also save/restore errno in case the signal handler support doesn't.
+ */
+int
+PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
+{
+ int save_errno = SOCK_ERRNO;
+ pgsocket tmpsock = PGINVALID_SOCKET;
+ int maxlen;
+ struct
+ {
+ uint32 packetlen;
+ CancelRequestPacket cp;
+ } crp;
+
+ if (!cancel)
+ {
+ strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
+ /* strlcpy probably doesn't change errno, but be paranoid */
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+ }
+
+ /*
+ * We need to open a temporary connection to the postmaster. Do this with
+ * only kernel calls.
+ */
+ if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
+ {
+ strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Since this connection will only be used to send a single packet of
+ * data, we don't need NODELAY. We also don't set the socket to
+ * nonblocking mode, because the API definition of PQcancel requires the
+ * cancel to be sent in a blocking way.
+ *
+ * We do set socket options related to keepalives and other TCP timeouts.
+ * This ensures that this function does not block indefinitely when
+ * reasonable keepalive and timeout settings have been provided.
+ */
+ if (cancel->raddr.addr.ss_family != AF_UNIX &&
+ cancel->keepalives != 0)
+ {
+#ifndef WIN32
+ if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+#ifdef PG_TCP_KEEPALIVE_IDLE
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
+ cancel->keepalives_idle))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPINTVL
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#ifdef TCP_KEEPCNT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
+ cancel->keepalives_count))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+
+#else /* WIN32 */
+
+#ifdef SIO_KEEPALIVE_VALS
+ if (!pqSetKeepalivesWin32(tmpsock,
+ cancel->keepalives_idle,
+ cancel->keepalives_interval))
+ {
+ strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif /* SIO_KEEPALIVE_VALS */
+#endif /* WIN32 */
+
+ /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
+#ifdef TCP_USER_TIMEOUT
+ if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
+ cancel->pgtcp_user_timeout))
+ {
+ strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+#endif
+ }
+
+retry3:
+ if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
+ cancel->raddr.salen) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry3;
+ strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /* Create and send the cancel request packet. */
+
+ crp.packetlen = pg_hton32((uint32) sizeof(crp));
+ crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ crp.cp.backendPID = pg_hton32(cancel->be_pid);
+ crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
+
+retry4:
+ if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry4;
+ strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
+ goto cancel_errReturn;
+ }
+
+ /*
+ * Wait for the postmaster to close the connection, which indicates that
+ * it's processed the request. Without this delay, we might issue another
+ * command only to find that our cancel zaps that command instead of the
+ * one we thought we were canceling. Note we don't actually expect this
+ * read to obtain any data, we are just waiting for EOF to be signaled.
+ */
+retry5:
+ if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
+ {
+ if (SOCK_ERRNO == EINTR)
+ /* Interrupted system call - we'll just try again */
+ goto retry5;
+ /* we ignore other error conditions */
+ }
+
+ /* All done */
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return true;
+
+cancel_errReturn:
+
+ /*
+ * Make sure we don't overflow the error buffer. Leave space for the \n at
+ * the end, and for the terminating zero.
+ */
+ maxlen = errbufsize - strlen(errbuf) - 2;
+ if (maxlen >= 0)
+ {
+ /*
+ * We can't invoke strerror here, since it's not signal-safe. Settle
+ * for printing the decimal value of errno. Even that has to be done
+ * the hard way.
+ */
+ int val = SOCK_ERRNO;
+ char buf[32];
+ char *bufp;
+
+ bufp = buf + sizeof(buf) - 1;
+ *bufp = '\0';
+ do
+ {
+ *(--bufp) = (val % 10) + '0';
+ val /= 10;
+ } while (val > 0);
+ bufp -= 6;
+ memcpy(bufp, "error ", 6);
+ strncat(errbuf, bufp, maxlen);
+ strcat(errbuf, "\n");
+ }
+ if (tmpsock != PGINVALID_SOCKET)
+ closesocket(tmpsock);
+ SOCK_ERRNO_SET(save_errno);
+ return false;
+}
+
+/*
+ * PQrequestCancel: old, not thread-safe function for requesting query cancel
+ *
+ * Returns true if able to send the cancel request, false if not.
+ *
+ * On failure, the error message is saved in conn->errorMessage; this means
+ * that this can't be used when there might be other active operations on
+ * the connection object.
+ *
+ * NOTE: error messages will be cut off at the current size of the
+ * error message buffer, since we dare not try to expand conn->errorMessage!
+ */
+int
+PQrequestCancel(PGconn *conn)
+{
+ int r;
+ PGcancel *cancel;
+
+ /* Check we have an open connection */
+ if (!conn)
+ return false;
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ strlcpy(conn->errorMessage.data,
+ "PQrequestCancel() -- connection is not open\n",
+ conn->errorMessage.maxlen);
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+
+ return false;
+ }
+
+ cancel = PQgetCancel(conn);
+ if (cancel)
+ {
+ r = PQcancel(cancel, conn->errorMessage.data,
+ conn->errorMessage.maxlen);
+ PQfreeCancel(cancel);
+ }
+ else
+ {
+ strlcpy(conn->errorMessage.data, "out of memory",
+ conn->errorMessage.maxlen);
+ r = false;
+ }
+
+ if (!r)
+ {
+ conn->errorMessage.len = strlen(conn->errorMessage.data);
+ conn->errorReported = 0;
+ }
+
+ return r;
+}
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 79e0b73d618..5d08b4904d3 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -443,8 +443,6 @@ static void pgpassfileWarning(PGconn *conn);
static void default_threadlock(int acquire);
static bool sslVerifyProtocolVersion(const char *version);
static bool sslVerifyProtocolRange(const char *min, const char *max);
-static bool parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context);
/* global variable because fe-auth.c needs to access it */
@@ -2081,9 +2079,9 @@ useKeepalives(PGconn *conn)
* store it in *result, complaining if there is any trailing garbage or an
* overflow. This allows any number of leading and trailing whitespaces.
*/
-static bool
-parse_int_param(const char *value, int *result, PGconn *conn,
- const char *context)
+bool
+pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context)
{
char *end;
long numval;
@@ -2134,8 +2132,8 @@ setKeepalivesIdle(PGconn *conn)
if (conn->keepalives_idle == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ if (!pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (idle < 0)
idle = 0;
@@ -2168,8 +2166,8 @@ setKeepalivesInterval(PGconn *conn)
if (conn->keepalives_interval == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ if (!pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
if (interval < 0)
interval = 0;
@@ -2203,8 +2201,8 @@ setKeepalivesCount(PGconn *conn)
if (conn->keepalives_count == NULL)
return 1;
- if (!parse_int_param(conn->keepalives_count, &count, conn,
- "keepalives_count"))
+ if (!pq_parse_int_param(conn->keepalives_count, &count, conn,
+ "keepalives_count"))
return 0;
if (count < 0)
count = 0;
@@ -2233,8 +2231,8 @@ setKeepalivesCount(PGconn *conn)
*
* CAUTION: This needs to be signal safe, since it's used by PQcancel.
*/
-static int
-setKeepalivesWin32(pgsocket sock, int idle, int interval)
+int
+pqSetKeepalivesWin32(pgsocket sock, int idle, int interval)
{
struct tcp_keepalive ka;
DWORD retsize;
@@ -2269,15 +2267,15 @@ prepKeepalivesWin32(PGconn *conn)
int interval = -1;
if (conn->keepalives_idle &&
- !parse_int_param(conn->keepalives_idle, &idle, conn,
- "keepalives_idle"))
+ !pq_parse_int_param(conn->keepalives_idle, &idle, conn,
+ "keepalives_idle"))
return 0;
if (conn->keepalives_interval &&
- !parse_int_param(conn->keepalives_interval, &interval, conn,
- "keepalives_interval"))
+ !pq_parse_int_param(conn->keepalives_interval, &interval, conn,
+ "keepalives_interval"))
return 0;
- if (!setKeepalivesWin32(conn->sock, idle, interval))
+ if (!pqSetKeepalivesWin32(conn->sock, idle, interval))
{
libpq_append_conn_error(conn, "%s(%s) failed: error code %d",
"WSAIoctl", "SIO_KEEPALIVE_VALS",
@@ -2300,8 +2298,8 @@ setTCPUserTimeout(PGconn *conn)
if (conn->pgtcp_user_timeout == NULL)
return 1;
- if (!parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
- "tcp_user_timeout"))
+ if (!pq_parse_int_param(conn->pgtcp_user_timeout, &timeout, conn,
+ "tcp_user_timeout"))
return 0;
if (timeout < 0)
@@ -2418,8 +2416,8 @@ connectDBComplete(PGconn *conn)
*/
if (conn->connect_timeout != NULL)
{
- if (!parse_int_param(conn->connect_timeout, &timeout, conn,
- "connect_timeout"))
+ if (!pq_parse_int_param(conn->connect_timeout, &timeout, conn,
+ "connect_timeout"))
{
/* mark the connection as bad to report the parsing failure */
conn->status = CONNECTION_BAD;
@@ -2666,7 +2664,7 @@ keep_going: /* We will come back to here until there is
thisport = DEF_PGPORT;
else
{
- if (!parse_int_param(ch->port, &thisport, conn, "port"))
+ if (!pq_parse_int_param(ch->port, &thisport, conn, "port"))
goto error_return;
if (thisport < 1 || thisport > 65535)
@@ -4694,373 +4692,6 @@ PQresetPoll(PGconn *conn)
return PGRES_POLLING_FAILED;
}
-/*
- * PQgetCancel: get a PGcancel structure corresponding to a connection.
- *
- * A copy is needed to be able to cancel a running query from a different
- * thread. If the same structure is used all structure members would have
- * to be individually locked (if the entire structure was locked, it would
- * be impossible to cancel a synchronous query because the structure would
- * have to stay locked for the duration of the query).
- */
-PGcancel *
-PQgetCancel(PGconn *conn)
-{
- PGcancel *cancel;
-
- if (!conn)
- return NULL;
-
- if (conn->sock == PGINVALID_SOCKET)
- return NULL;
-
- cancel = malloc(sizeof(PGcancel));
- if (cancel == NULL)
- return NULL;
-
- memcpy(&cancel->raddr, &conn->raddr, sizeof(SockAddr));
- cancel->be_pid = conn->be_pid;
- cancel->be_key = conn->be_key;
- /* We use -1 to indicate an unset connection option */
- cancel->pgtcp_user_timeout = -1;
- cancel->keepalives = -1;
- cancel->keepalives_idle = -1;
- cancel->keepalives_interval = -1;
- cancel->keepalives_count = -1;
- if (conn->pgtcp_user_timeout != NULL)
- {
- if (!parse_int_param(conn->pgtcp_user_timeout,
- &cancel->pgtcp_user_timeout,
- conn, "tcp_user_timeout"))
- goto fail;
- }
- if (conn->keepalives != NULL)
- {
- if (!parse_int_param(conn->keepalives,
- &cancel->keepalives,
- conn, "keepalives"))
- goto fail;
- }
- if (conn->keepalives_idle != NULL)
- {
- if (!parse_int_param(conn->keepalives_idle,
- &cancel->keepalives_idle,
- conn, "keepalives_idle"))
- goto fail;
- }
- if (conn->keepalives_interval != NULL)
- {
- if (!parse_int_param(conn->keepalives_interval,
- &cancel->keepalives_interval,
- conn, "keepalives_interval"))
- goto fail;
- }
- if (conn->keepalives_count != NULL)
- {
- if (!parse_int_param(conn->keepalives_count,
- &cancel->keepalives_count,
- conn, "keepalives_count"))
- goto fail;
- }
-
- return cancel;
-
-fail:
- free(cancel);
- return NULL;
-}
-
-/* PQfreeCancel: free a cancel structure */
-void
-PQfreeCancel(PGcancel *cancel)
-{
- free(cancel);
-}
-
-
-/*
- * Sets an integer socket option on a TCP socket, if the provided value is
- * not negative. Returns false if setsockopt fails for some reason.
- *
- * CAUTION: This needs to be signal safe, since it's used by PQcancel.
- */
-#if defined(TCP_USER_TIMEOUT) || !defined(WIN32)
-static bool
-optional_setsockopt(int fd, int protoid, int optid, int value)
-{
- if (value < 0)
- return true;
- if (setsockopt(fd, protoid, optid, (char *) &value, sizeof(value)) < 0)
- return false;
- return true;
-}
-#endif
-
-
-/*
- * PQcancel: request query cancel
- *
- * The return value is true if the cancel request was successfully
- * dispatched, false if not (in which case an error message is available).
- * Note: successful dispatch is no guarantee that there will be any effect at
- * the backend. The application must read the operation result as usual.
- *
- * On failure, an error message is stored in *errbuf, which must be of size
- * errbufsize (recommended size is 256 bytes). *errbuf is not changed on
- * success return.
- *
- * CAUTION: we want this routine to be safely callable from a signal handler
- * (for example, an application might want to call it in a SIGINT handler).
- * This means we cannot use any C library routine that might be non-reentrant.
- * malloc/free are often non-reentrant, and anything that might call them is
- * just as dangerous. We avoid sprintf here for that reason. Building up
- * error messages with strcpy/strcat is tedious but should be quite safe.
- * We also save/restore errno in case the signal handler support doesn't.
- */
-int
-PQcancel(PGcancel *cancel, char *errbuf, int errbufsize)
-{
- int save_errno = SOCK_ERRNO;
- pgsocket tmpsock = PGINVALID_SOCKET;
- int maxlen;
- struct
- {
- uint32 packetlen;
- CancelRequestPacket cp;
- } crp;
-
- if (!cancel)
- {
- strlcpy(errbuf, "PQcancel() -- no cancel object supplied", errbufsize);
- /* strlcpy probably doesn't change errno, but be paranoid */
- SOCK_ERRNO_SET(save_errno);
- return false;
- }
-
- /*
- * We need to open a temporary connection to the postmaster. Do this with
- * only kernel calls.
- */
- if ((tmpsock = socket(cancel->raddr.addr.ss_family, SOCK_STREAM, 0)) == PGINVALID_SOCKET)
- {
- strlcpy(errbuf, "PQcancel() -- socket() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Since this connection will only be used to send a single packet of
- * data, we don't need NODELAY. We also don't set the socket to
- * nonblocking mode, because the API definition of PQcancel requires the
- * cancel to be sent in a blocking way.
- *
- * We do set socket options related to keepalives and other TCP timeouts.
- * This ensures that this function does not block indefinitely when
- * reasonable keepalive and timeout settings have been provided.
- */
- if (cancel->raddr.addr.ss_family != AF_UNIX &&
- cancel->keepalives != 0)
- {
-#ifndef WIN32
- if (!optional_setsockopt(tmpsock, SOL_SOCKET, SO_KEEPALIVE, 1))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(SO_KEEPALIVE) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
-#ifdef PG_TCP_KEEPALIVE_IDLE
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, PG_TCP_KEEPALIVE_IDLE,
- cancel->keepalives_idle))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(" PG_TCP_KEEPALIVE_IDLE_STR ") failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPINTVL
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPINTVL,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPINTVL) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#ifdef TCP_KEEPCNT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_KEEPCNT,
- cancel->keepalives_count))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_KEEPCNT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
-
-#else /* WIN32 */
-
-#ifdef SIO_KEEPALIVE_VALS
- if (!setKeepalivesWin32(tmpsock,
- cancel->keepalives_idle,
- cancel->keepalives_interval))
- {
- strlcpy(errbuf, "PQcancel() -- WSAIoctl(SIO_KEEPALIVE_VALS) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif /* SIO_KEEPALIVE_VALS */
-#endif /* WIN32 */
-
- /* TCP_USER_TIMEOUT works the same way on Unix and Windows */
-#ifdef TCP_USER_TIMEOUT
- if (!optional_setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,
- cancel->pgtcp_user_timeout))
- {
- strlcpy(errbuf, "PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: ", errbufsize);
- goto cancel_errReturn;
- }
-#endif
- }
-
-retry3:
- if (connect(tmpsock, (struct sockaddr *) &cancel->raddr.addr,
- cancel->raddr.salen) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry3;
- strlcpy(errbuf, "PQcancel() -- connect() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /* Create and send the cancel request packet. */
-
- crp.packetlen = pg_hton32((uint32) sizeof(crp));
- crp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
- crp.cp.backendPID = pg_hton32(cancel->be_pid);
- crp.cp.cancelAuthCode = pg_hton32(cancel->be_key);
-
-retry4:
- if (send(tmpsock, (char *) &crp, sizeof(crp), 0) != (int) sizeof(crp))
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry4;
- strlcpy(errbuf, "PQcancel() -- send() failed: ", errbufsize);
- goto cancel_errReturn;
- }
-
- /*
- * Wait for the postmaster to close the connection, which indicates that
- * it's processed the request. Without this delay, we might issue another
- * command only to find that our cancel zaps that command instead of the
- * one we thought we were canceling. Note we don't actually expect this
- * read to obtain any data, we are just waiting for EOF to be signaled.
- */
-retry5:
- if (recv(tmpsock, (char *) &crp, 1, 0) < 0)
- {
- if (SOCK_ERRNO == EINTR)
- /* Interrupted system call - we'll just try again */
- goto retry5;
- /* we ignore other error conditions */
- }
-
- /* All done */
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return true;
-
-cancel_errReturn:
-
- /*
- * Make sure we don't overflow the error buffer. Leave space for the \n at
- * the end, and for the terminating zero.
- */
- maxlen = errbufsize - strlen(errbuf) - 2;
- if (maxlen >= 0)
- {
- /*
- * We can't invoke strerror here, since it's not signal-safe. Settle
- * for printing the decimal value of errno. Even that has to be done
- * the hard way.
- */
- int val = SOCK_ERRNO;
- char buf[32];
- char *bufp;
-
- bufp = buf + sizeof(buf) - 1;
- *bufp = '\0';
- do
- {
- *(--bufp) = (val % 10) + '0';
- val /= 10;
- } while (val > 0);
- bufp -= 6;
- memcpy(bufp, "error ", 6);
- strncat(errbuf, bufp, maxlen);
- strcat(errbuf, "\n");
- }
- if (tmpsock != PGINVALID_SOCKET)
- closesocket(tmpsock);
- SOCK_ERRNO_SET(save_errno);
- return false;
-}
-
-
-/*
- * PQrequestCancel: old, not thread-safe function for requesting query cancel
- *
- * Returns true if able to send the cancel request, false if not.
- *
- * On failure, the error message is saved in conn->errorMessage; this means
- * that this can't be used when there might be other active operations on
- * the connection object.
- *
- * NOTE: error messages will be cut off at the current size of the
- * error message buffer, since we dare not try to expand conn->errorMessage!
- */
-int
-PQrequestCancel(PGconn *conn)
-{
- int r;
- PGcancel *cancel;
-
- /* Check we have an open connection */
- if (!conn)
- return false;
-
- if (conn->sock == PGINVALID_SOCKET)
- {
- strlcpy(conn->errorMessage.data,
- "PQrequestCancel() -- connection is not open\n",
- conn->errorMessage.maxlen);
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
-
- return false;
- }
-
- cancel = PQgetCancel(conn);
- if (cancel)
- {
- r = PQcancel(cancel, conn->errorMessage.data,
- conn->errorMessage.maxlen);
- PQfreeCancel(cancel);
- }
- else
- {
- strlcpy(conn->errorMessage.data, "out of memory",
- conn->errorMessage.maxlen);
- r = false;
- }
-
- if (!r)
- {
- conn->errorMessage.len = strlen(conn->errorMessage.data);
- conn->errorReported = 0;
- }
-
- return r;
-}
-
-
/*
* pqPacketSend() -- convenience routine to send a message to server.
*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f0143726bbc..48c10b474f5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -678,12 +678,18 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
+ const char *context);
extern pgthreadlock_t pg_g_threadlock;
#define pglock_thread() pg_g_threadlock(true)
#define pgunlock_thread() pg_g_threadlock(false)
+#if defined(WIN32) && defined(SIO_KEEPALIVE_VALS)
+extern int pqSetKeepalivesWin32(pgsocket sock, int idle, int interval);
+#endif
+
/* === in fe-exec.c === */
extern void pqSetResultError(PGresult *res, PQExpBuffer errorMessage, int offset);
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c76a1e40c83..a47b6f425dd 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -6,6 +6,7 @@
libpq_sources = files(
'fe-auth-scram.c',
'fe-auth.c',
+ 'fe-cancel.c',
'fe-connect.c',
'fe-exec.c',
'fe-lobj.c',
base-commit: a3a836fb5e51183eae624d43225279306c2285b8
--
2.34.1
Attachment: v28-0005-Start-using-new-libpq-cancel-APIs.patch (application/x-patch)
From cb5b87e6e0c9127352013453bc2e944696d0925a Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v28 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..81749b2cdd0 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..3ac74ff6a7f 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..c5cd2f57875 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Attachment: v28-0004-Add-non-blocking-version-of-PQcancel.patch (application/x-patch)
From 819ecc80382b93ffa0a9757119c396e4bf667908 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v28 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
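For illustration, a caller without an event loop framework could drive the new
API roughly like this. This is a sketch only, assuming a connected PGconn *conn
and the functions introduced by this patch (PQcancelConn, PQcancelStatus,
PQcancelPoll, PQcancelSocket, PQcancelErrorMessage, PQcancelFinish); it uses
select(2) where a real event loop would use its own readiness notification, and
it needs a running server plus -lpq to actually run:

```c
/* Sketch: dispatch a cancel request without blocking indefinitely in libpq. */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

static int
cancel_without_blocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return 0;
	}

	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (pollres == PGRES_POLLING_OK)
			break;				/* cancel request fully dispatched */
		if (pollres == PGRES_POLLING_FAILED)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return 0;
		}

		/* Wait until the socket is ready in the direction libpq asked for */
		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);
		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
			break;
	}

	PQcancelFinish(cancelConn);
	return 1;
}
```

The blocking PQcancelSend wraps essentially this loop internally; an event loop
based application would instead register PQcancelSocket's descriptor with its
loop and call PQcancelPoll whenever the socket becomes ready.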
---
doc/src/sgml/libpq.sgml | 280 +++++++++++++++--
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-cancel.c | 284 ++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 130 +++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++++++++-
7 files changed, 964 insertions(+), 38 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc0..9808e678650 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed, while a <symbol>CONNECTION_OK</symbol> result
+ for <structname>PGconn</structname> means that queries can be sent over
+ the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request) and frees the memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, this connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/> that can
+ be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal-safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions had to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to
+ the server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..125bc80679a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index ce28d39f3f5..e37ee0c45ec 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,290 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pq_release_conn_hosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using pqConnectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we
+ * unexpectedly do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index aeb3adc0e31..2cd95767fdb 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2354,10 +2402,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address. These fields have already been set up in PQcancelConn. So leave
+ * these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2499,7 +2555,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2624,13 +2683,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3272,6 +3335,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -4021,8 +4107,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4390,6 +4482,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pq_release_conn_hosts(conn);
free(conn->client_encoding_initial);
@@ -4540,6 +4633,15 @@ pq_release_conn_hosts(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4593,7 +4695,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..857ba54d943 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c1ff12dd396..e780b62b2bd 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -678,6 +687,7 @@ extern void pqDropConnection(PGconn *conn, bool flushInput);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pq_parse_int_param(const char *value, int *result, PGconn *conn,
const char *context);
extern void pq_release_conn_hosts(PGconn *conn);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..580003002e4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
On 2024-Jan-28, Jelte Fennema-Nio wrote:
> On Sun, 28 Jan 2024 at 10:51, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
> > Both of those are fixed now.
>
> Okay, there turned out to also be an issue on Windows with
> setKeepalivesWin32 not being available in fe-cancel.c. That's fixed
> now too (as well as some minor formatting issues).

Thanks! I committed 0001 now. I also renamed the new
pq_parse_int_param to pqParseIntParam, for consistency with other
routines there. Please rebase the other patches.
Thanks,
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Thou shalt check the array bounds of all strings (indeed, all arrays), for
surely where thou typest "foo" someone someday shall type
"supercalifragilisticexpialidocious" (5th Commandment for C programmers)
On Mon, 29 Jan 2024 at 12:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Thanks! I committed 0001 now. I also renamed the new
pq_parse_int_param to pqParseIntParam, for consistency with other
routines there. Please rebase the other patches.
Awesome! Rebased, and renamed pq_release_conn_hosts to
pqReleaseConnHosts for the same consistency reasons.
Attachments:
v29-0004-Add-non-blocking-version-of-PQcancel.patch
From c24759d9932d6cf5d9e18c30105e1bb520270de9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v29 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 +++++++++++++++--
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-cancel.c | 284 ++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 130 +++++++-
src/interfaces/libpq/libpq-fe.h | 27 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++++++++-
7 files changed, 964 insertions(+), 38 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc0..9808e678650 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests, in a blocking manner, that the server abandon processing of the current command.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ connection is now closed. A <symbol>CONNECTION_OK</symbol> result for a
+ <structname>PGconn</structname>, in contrast, means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ An insecure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server at the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues that this
+ function has. This function has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..125bc80679a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c4..7416791d9f0 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,290 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn * cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn * cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using pqConnectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * reach the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely do
+ * receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF. Which is what we were
+ * expecting. The cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn * cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn * cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn * cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn * cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn * cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index dd240d42d70..4add35ec5cf 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections should save their be_pid and be_key across
+ * PQcancelReset invocations. Otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel anymore.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should try only one host and
+ * address. These fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4494,6 +4587,15 @@ pqReleaseConnHosts(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ return;
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4547,7 +4649,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..857ba54d943 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,30 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn * PQcancelConn(PGconn *conn);
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn * conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
+extern int PQcancelSocket(const PGcancelConn * cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
+extern void PQcancelReset(PGcancelConn * cancelConn);
+extern void PQcancelFinish(PGcancelConn * cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 07732927a5b..be45d6098a5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -681,6 +690,7 @@ extern int pqSetKeepalivesWin32(pgsocket sock, int idle, int interval);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
extern void pqReleaseConnHosts(PGconn *conn);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..580003002e4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("monitoring query failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test that PQcancelReset works on the cancel connection and that it
+ * can be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
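For reviewers who want the intended call sequence without reading the whole test: a minimal sketch of the non-blocking flow, assuming the declarations this patch adds to libpq-fe.h. This is not a standalone program; it needs a libpq built with the patch and a query already running on `conn`, and a real event loop would register the socket rather than call select():

```c
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Sketch: cancel the query running on conn without blocking indefinitely
 * in libpq itself. Returns 1 on success, 0 on failure. */
static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
	{
		fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return 0;
	}

	for (;;)
	{
		PostgresPollingStatusType st = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		rmask,
					wmask;

		if (st == PGRES_POLLING_OK)
			break;
		if (st == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "%s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return 0;
		}

		/* In an event loop these would be readiness registrations */
		FD_ZERO(&rmask);
		FD_ZERO(&wmask);
		if (st == PGRES_POLLING_READING)
			FD_SET(sock, &rmask);
		else
			FD_SET(sock, &wmask);
		(void) select(sock + 1, &rmask, &wmask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return 1;
}
```

The shape is deliberately identical to the PQconnectPoll loop, which is the point of reusing the regular connection-establishment code path.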
Attachment: v29-0002-libpq-Add-pqReleaseConnHosts-function.patch
From b9db005e37e3dce8aa05d4f09e03a1d806bd8bcd Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:28 +0100
Subject: [PATCH v29 2/5] libpq: Add pqReleaseConnHosts function
In a follow-up commit we'll need to free the connhost field from a
function defined in fe-cancel.c, so this extracts the logic into a
dedicated extern function.
---
src/interfaces/libpq/fe-connect.c | 38 ++++++++++++++++++++-----------
src/interfaces/libpq/libpq-int.h | 1 +
2 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index c0dea144a00..079abfca9e2 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4349,19 +4349,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4480,6 +4468,30 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * pqReleaseConnHosts
+ * - Free the host list in the PGconn.
+ */
+void
+pqReleaseConnHosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index ff8e0dce776..0d06e260262 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -683,6 +683,7 @@ extern int pqPacketSend(PGconn *conn, char pack_type,
extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
+extern void pqReleaseConnHosts(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
base-commit: 6a1ea02c491d16474a6214603dce40b5b122d4d1
--
2.34.1
Attachment: v29-0005-Start-using-new-libpq-cancel-APIs.patch
From bb3c4589264f962e0b833450958e49be4fb1f4a8 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v29 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..81749b2cdd0 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..3ac74ff6a7f 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..c5cd2f57875 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..de31a875716 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancelSend failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
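The mechanical translation applied throughout this patch (dblink, connect_utils, isolationtester) follows one pattern. A side-by-side sketch, assuming the API from patch 0001; the `errbuf` size and message wording are illustrative:

```c
#include <stdio.h>
#include "libpq-fe.h"

/* Before: caller supplies the error buffer; PQgetCancel may return NULL */
static void
cancel_old(PGconn *conn)
{
	PGcancel   *cancel = PQgetCancel(conn);
	char		errbuf[256];

	if (cancel)
	{
		if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
			fprintf(stderr, "PQcancel failed: %s\n", errbuf);
		PQfreeCancel(cancel);
	}
}

/* After: the error message lives inside the PGcancelConn object */
static void
cancel_new(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);

	if (!PQcancelSend(cancelConn))
		fprintf(stderr, "PQcancelSend failed: %s\n",
				PQcancelErrorMessage(cancelConn));
	PQcancelFinish(cancelConn);
}
```

Besides dropping the fixed-size buffer, the new form also gets a proper error message when allocation fails, instead of the silent NULL from PQgetCancel.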
Attachment: v29-0003-libpq-Change-some-static-functions-to-extern.patch
From 01991be38133f43dd89328ca74f0653d1bb4ca27 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 16:47:51 +0100
Subject: [PATCH v29 3/5] libpq: Change some static functions to extern
This is in preparation for a follow-up commit that starts using these
functions from fe-cancel.c.
---
src/interfaces/libpq/fe-connect.c | 85 +++++++++++++++----------------
src/interfaces/libpq/libpq-int.h | 6 +++
2 files changed, 46 insertions(+), 45 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 079abfca9e2..dd240d42d70 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -387,15 +387,10 @@ static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";
static bool connectOptions1(PGconn *conn, const char *conninfo);
-static bool connectOptions2(PGconn *conn);
-static int connectDBStart(PGconn *conn);
-static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
-static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
-static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -644,7 +639,7 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions connectDBStart, connectDBComplete
+ * Internally, the static functions pqConnectDBStart, pqConnectDBComplete
* are part of the connection procedure.
*/
@@ -678,7 +673,7 @@ PQconnectdbParams(const char *const *keywords,
PGconn *conn = PQconnectStartParams(keywords, values, expand_dbname);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -731,7 +726,7 @@ PQconnectdb(const char *conninfo)
PGconn *conn = PQconnectStart(conninfo);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -785,7 +780,7 @@ PQconnectStartParams(const char *const *keywords,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -819,15 +814,15 @@ PQconnectStartParams(const char *const *keywords,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -863,7 +858,7 @@ PQconnectStart(const char *conninfo)
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -876,15 +871,15 @@ PQconnectStart(const char *conninfo)
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -895,7 +890,7 @@ PQconnectStart(const char *conninfo)
* Move option values into conn structure
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message.
*/
@@ -933,7 +928,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
*
* Internal subroutine to set up connection parameters given an already-
* created PGconn and a conninfo string. Derived settings should be
- * processed by calling connectOptions2 next. (We split them because
+ * processed by calling pqConnectOptions2 next. (We split them because
* PQsetdbLogin overrides defaults in between.)
*
* Returns true if OK, false if trouble (in which case errorMessage is set
@@ -1055,15 +1050,15 @@ libpq_prng_init(PGconn *conn)
}
/*
- * connectOptions2
+ * pqConnectOptions2
*
* Compute derived connection options after absorbing all user-supplied info.
*
* Returns true if OK, false if trouble (in which case errorMessage is set
* and so is conn->status).
*/
-static bool
-connectOptions2(PGconn *conn)
+bool
+pqConnectOptions2(PGconn *conn)
{
int i;
@@ -1822,7 +1817,7 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -1901,14 +1896,14 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (connectDBStart(conn))
- (void) connectDBComplete(conn);
+ if (pqConnectDBStart(conn))
+ (void) pqConnectDBComplete(conn);
return conn;
@@ -2277,14 +2272,14 @@ setTCPUserTimeout(PGconn *conn)
}
/* ----------
- * connectDBStart -
+ * pqConnectDBStart -
* Begin the process of making a connection to the backend.
*
* Returns 1 if successful, 0 if not.
* ----------
*/
-static int
-connectDBStart(PGconn *conn)
+int
+pqConnectDBStart(PGconn *conn)
{
if (!conn)
return 0;
@@ -2347,14 +2342,14 @@ connect_errReturn:
/*
- * connectDBComplete
+ * pqConnectDBComplete
*
* Block and complete a connection.
*
* Returns 1 on success, 0 on failure.
*/
-static int
-connectDBComplete(PGconn *conn)
+int
+pqConnectDBComplete(PGconn *conn)
{
PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
time_t finish_time = ((time_t) -1);
@@ -2704,7 +2699,7 @@ keep_going: /* We will come back to here until there is
* combining it with the insertion.
*
* We don't need to initialize conn->prng_state here, because that
- * already happened in connectOptions2.
+ * already happened in pqConnectOptions2.
*/
for (int i = 1; i < conn->naddr; i++)
{
@@ -4181,7 +4176,7 @@ internal_ping(PGconn *conn)
/* Attempt to complete the connection */
if (conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
/* Definitely OK if we succeeded */
if (conn->status != CONNECTION_BAD)
@@ -4233,11 +4228,11 @@ internal_ping(PGconn *conn)
/*
- * makeEmptyPGconn
+ * pqMakeEmptyPGconn
* - create a PGconn data structure with (as yet) no interesting data
*/
-static PGconn *
-makeEmptyPGconn(void)
+PGconn *
+pqMakeEmptyPGconn(void)
{
PGconn *conn;
@@ -4330,7 +4325,7 @@ makeEmptyPGconn(void)
* freePGconn
* - free an idle (closed) PGconn data structure
*
- * NOTE: this should not overlap any functionality with closePGconn().
+ * NOTE: this should not overlap any functionality with pqClosePGconn().
* Clearing/resetting of transient state belongs there; what we do here is
* release data that is to be held for the life of the PGconn structure.
* If a value ought to be cleared/freed during PQreset(), do it there not here.
@@ -4516,15 +4511,15 @@ sendTerminateConn(PGconn *conn)
}
/*
- * closePGconn
+ * pqClosePGconn
* - properly close a connection to the backend
*
* This should reset or release all transient state, but NOT the connection
* parameters. On exit, the PGconn should be in condition to start a fresh
* connection with the same parameters (see PQreset()).
*/
-static void
-closePGconn(PGconn *conn)
+void
+pqClosePGconn(PGconn *conn)
{
/*
* If possible, send Terminate message to close the connection politely.
@@ -4567,7 +4562,7 @@ PQfinish(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
freePGconn(conn);
}
}
@@ -4581,9 +4576,9 @@ PQreset(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- if (connectDBStart(conn) && connectDBComplete(conn))
+ if (pqConnectDBStart(conn) && pqConnectDBComplete(conn))
{
/*
* Notify event procs of successful reset.
@@ -4614,9 +4609,9 @@ PQresetStart(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- return connectDBStart(conn);
+ return pqConnectDBStart(conn);
}
return 0;
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 0d06e260262..07732927a5b 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -684,6 +684,12 @@ extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
extern void pqReleaseConnHosts(PGconn *conn);
+extern bool pqConnectOptions2(PGconn *conn);
+extern int pqConnectDBStart(PGconn *conn);
+extern int pqConnectDBComplete(PGconn *conn);
+extern PGconn *pqMakeEmptyPGconn(void);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
+extern void pqClosePGconn(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
On 2024-Jan-29, Jelte Fennema-Nio wrote:
On Mon, 29 Jan 2024 at 12:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Thanks! I committed 0001 now. I also renamed the new
pq_parse_int_param to pqParseIntParam, for consistency with other
routines there. Please rebase the other patches.
Awesome! Rebased, and renamed
pqReleaseConnHosts for the same consistency reasons.
Thank you, looks good.
I propose the following minor/trivial fixes over your initial 3 patches.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"I can't go to a restaurant and order food because I keep looking at the
fonts on the menu. Five minutes later I realize that it's also talking
about food" (Donald Knuth)
Attachments:
0001-pgindent.patch.txt
From 92ca8dc2739a777ff5a0df990d6e9818c5729ac5 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Fri, 2 Feb 2024 10:57:02 +0100
Subject: [PATCH 1/4] pgindent
---
src/interfaces/libpq/fe-cancel.c | 14 +++++++-------
src/interfaces/libpq/libpq-fe.h | 17 +++++++++--------
src/tools/pgindent/typedefs.list | 1 +
3 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 7416791d9f..d75c9628e7 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -137,7 +137,7 @@ oom_error:
* Returns 1 if successful 0 if not.
*/
int
-PQcancelSend(PGcancelConn * cancelConn)
+PQcancelSend(PGcancelConn *cancelConn)
{
if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
return 1;
@@ -157,7 +157,7 @@ PQcancelSend(PGcancelConn * cancelConn)
* Poll a cancel connection. For usage details see PQconnectPoll.
*/
PostgresPollingStatusType
-PQcancelPoll(PGcancelConn * cancelConn)
+PQcancelPoll(PGcancelConn *cancelConn)
{
PGconn *conn = (PGconn *) cancelConn;
int n;
@@ -249,7 +249,7 @@ PQcancelPoll(PGcancelConn * cancelConn)
* Get the status of a cancel connection.
*/
ConnStatusType
-PQcancelStatus(const PGcancelConn * cancelConn)
+PQcancelStatus(const PGcancelConn *cancelConn)
{
return PQstatus((const PGconn *) cancelConn);
}
@@ -260,7 +260,7 @@ PQcancelStatus(const PGcancelConn * cancelConn)
* Get the socket of the cancel connection.
*/
int
-PQcancelSocket(const PGcancelConn * cancelConn)
+PQcancelSocket(const PGcancelConn *cancelConn)
{
return PQsocket((const PGconn *) cancelConn);
}
@@ -271,7 +271,7 @@ PQcancelSocket(const PGcancelConn * cancelConn)
* Get the socket of the cancel connection.
*/
char *
-PQcancelErrorMessage(const PGcancelConn * cancelConn)
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
{
return PQerrorMessage((const PGconn *) cancelConn);
}
@@ -283,7 +283,7 @@ PQcancelErrorMessage(const PGcancelConn * cancelConn)
* request.
*/
void
-PQcancelReset(PGcancelConn * cancelConn)
+PQcancelReset(PGcancelConn *cancelConn)
{
pqClosePGconn((PGconn *) cancelConn);
cancelConn->conn.status = CONNECTION_STARTING;
@@ -299,7 +299,7 @@ PQcancelReset(PGcancelConn * cancelConn)
* Closes and frees the cancel connection.
*/
void
-PQcancelFinish(PGcancelConn * cancelConn)
+PQcancelFinish(PGcancelConn *cancelConn)
{
PQfinish((PGconn *) cancelConn);
}
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 857ba54d94..851e549355 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -329,17 +329,18 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
extern void PQreset(PGconn *conn);
/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
-extern PGcancelConn * PQcancelConn(PGconn *conn);
+extern PGcancelConn *PQcancelConn(PGconn *conn);
+
/* issue a blocking cancel request */
-extern int PQcancelSend(PGcancelConn * conn);
+extern int PQcancelSend(PGcancelConn *conn);
/* issue or poll a non-blocking cancel request */
-extern PostgresPollingStatusType PQcancelPoll(PGcancelConn * cancelConn);
-extern ConnStatusType PQcancelStatus(const PGcancelConn * cancelConn);
-extern int PQcancelSocket(const PGcancelConn * cancelConn);
-extern char *PQcancelErrorMessage(const PGcancelConn * cancelConn);
-extern void PQcancelReset(PGcancelConn * cancelConn);
-extern void PQcancelFinish(PGcancelConn * cancelConn);
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
/* request a cancel structure */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 91433d439b..9ffb169e9d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1760,6 +1760,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
--
2.39.2
0002-Add-missing-period.patch.txt
From 2fb777f47288815a370a15c7c17f39496b64c4f5 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Fri, 2 Feb 2024 13:16:51 +0100
Subject: [PATCH 2/4] Add missing period
---
doc/src/sgml/libpq.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 9808e67865..67f6378ba8 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -6093,7 +6093,7 @@ int PQcancelSend(PGcancelConn *conn);
<para>
The request is made over the given <structname>PGcancelConn</structname>,
- which needs to be created with <xref linkend="libpq-PQcancelConn"/>
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
The return value of <xref linkend="libpq-PQcancelSend"/>
is 1 if the cancel request was successfully
dispatched and 0 if not. If it was unsuccessful, the error message can be
--
2.39.2
0003-Add-missing-const-decorator-in-documentation.patch.txt
From c2a6bc14174a1e7a8a4f1508a5c30ee0d30d5f47 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Fri, 2 Feb 2024 13:17:10 +0100
Subject: [PATCH 3/4] Add missing 'const' decorator in documentation
---
doc/src/sgml/libpq.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 67f6378ba8..8648379f8d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -6166,7 +6166,7 @@ ConnStatusType PQcancelStatus(const PGcancelConn *conn);
A version of <xref linkend="libpq-PQsocket"/> that can be used for
cancellation connections.
<synopsis>
-int PQcancelSocket(PGcancelConn *conn);
+int PQcancelSocket(const PGcancelConn *conn);
</synopsis>
</para>
</listitem>
--
2.39.2
0004-Wording-whitespace-changes.patch.txt
From ba6cc0ba4d387015d64a98a37eb72188191c3672 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Fri, 2 Feb 2024 13:17:25 +0100
Subject: [PATCH 4/4] Wording, whitespace changes
---
doc/src/sgml/libpq.sgml | 8 ++++----
src/interfaces/libpq/fe-cancel.c | 9 +++------
src/interfaces/libpq/fe-connect.c | 12 ++++++------
3 files changed, 13 insertions(+), 16 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8648379f8d..81b4028381 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -6052,7 +6052,7 @@ PGcancelConn *PQcancelConn(PGconn *conn);
connection. A cancel request can be sent over this connection in a
blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
- The return value should can be passed to <xref linkend="libpq-PQcancelStatus"/>,
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
to check if the <structname>PGcancelConn</structname> object was
created successfully. The <structname>PGcancelConn</structname> object
is an opaque structure that is not meant to be accessed directly by the
@@ -6150,9 +6150,9 @@ ConnStatusType PQcancelStatus(const PGcancelConn *conn);
returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
it means that the dispatch of the cancel request has completed (although
this is no promise that the query was actually canceled) and that the
- connection is now closed. While a <symbol>CONNECTION_OK</symbol> result
- for <structname>PGconn</structname> means that queries can be sent over
- the connection.
+ cancel connection is now closed, while a <symbol>CONNECTION_OK</symbol>
+ result for <structname>PGconn</structname> means that queries can be
+ sent over the connection.
</para>
</listitem>
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index d75c9628e7..6420384be7 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -51,7 +51,6 @@ PQcancelConn(PGconn *conn)
return (PGcancelConn *) cancelConn;
}
-
/*
* Indicate that this connection is used to send a cancellation
*/
@@ -223,20 +222,19 @@ PQcancelPoll(PGcancelConn *cancelConn)
#endif
/*
- * We don't expect any data, only connection closure. So if we strangly do
+ * We don't expect any data, only connection closure. So if we strangely do
* receive some data we consider that an error.
*/
if (n > 0)
{
-
libpq_append_conn_error(conn, "received unexpected response from server");
conn->status = CONNECTION_BAD;
return PGRES_POLLING_FAILED;
}
/*
- * Getting here means that we received an EOF. Which is what we were
- * expecting. The cancel request has completed.
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
*/
cancelConn->conn.status = CONNECTION_OK;
resetPQExpBuffer(&conn->errorMessage);
@@ -304,7 +302,6 @@ PQcancelFinish(PGcancelConn *cancelConn)
PQfinish((PGconn *) cancelConn);
}
-
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 4add35ec5c..4f9b2182db 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -618,9 +618,9 @@ pqDropServerData(PGconn *conn)
conn->write_err_msg = NULL;
/*
- * Cancel connections should save their be_pid and be_key across
- * PQcancelReset invocations. Otherwise they would not have access to the
- * secret token of the connection they are supposed to cancel anymore.
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
*/
if (!conn->cancelRequest)
{
@@ -648,8 +648,8 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions pqConnectDBStart, pqConnectDBComplete
- * are part of the connection procedure.
+ * The non-exported functions pqConnectDBStart, pqConnectDBComplete are
+ * part of the connection procedure implementation.
*/
/*
@@ -2358,7 +2358,7 @@ pqConnectDBStart(PGconn *conn)
* anything else looks at it.)
*
* Cancel requests are special though, they should only try one host and
- * address. These fields have already set up in PQcancelConn. So leave
+ * address, and these fields have already set up in PQcancelConn, so leave
* these fields alone for cancel requests.
*/
if (!conn->cancelRequest)
--
2.39.2
On Fri, 2 Feb 2024 at 13:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Thank you, looks good.
I propose the following minor/trivial fixes over your initial 3 patches.
All of those seem like good fixes. Attached is an updated patchset
where they are all applied, as well as a fix adding a missing word
("been") in a comment that I noticed while reading your fixes.
Attachments:
v30-0003-libpq-Change-some-static-functions-to-extern.patch
From 7736e940567878c32355c2143cddba3b13bfa71e Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 16:47:51 +0100
Subject: [PATCH v30 3/5] libpq: Change some static functions to extern
This is in preparation of a follow up commit that starts using these
functions from fe-cancel.c.
---
src/interfaces/libpq/fe-connect.c | 87 +++++++++++++++----------------
src/interfaces/libpq/libpq-int.h | 6 +++
2 files changed, 47 insertions(+), 46 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 079abfca9e..7d8616eb6d 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -387,15 +387,10 @@ static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";
static bool connectOptions1(PGconn *conn, const char *conninfo);
-static bool connectOptions2(PGconn *conn);
-static int connectDBStart(PGconn *conn);
-static int connectDBComplete(PGconn *conn);
static PGPing internal_ping(PGconn *conn);
-static PGconn *makeEmptyPGconn(void);
static void pqFreeCommandQueue(PGcmdQueueEntry *queue);
static bool fillPGconn(PGconn *conn, PQconninfoOption *connOptions);
static void freePGconn(PGconn *conn);
-static void closePGconn(PGconn *conn);
static void release_conn_addrinfo(PGconn *conn);
static int store_conn_addrinfo(PGconn *conn, struct addrinfo *addrlist);
static void sendTerminateConn(PGconn *conn);
@@ -644,8 +639,8 @@ pqDropServerData(PGconn *conn)
* PQconnectStart or PQconnectStartParams (which differ in the same way as
* PQconnectdb and PQconnectdbParams) and PQconnectPoll.
*
- * Internally, the static functions connectDBStart, connectDBComplete
- * are part of the connection procedure.
+ * The non-exported functions pqConnectDBStart, pqConnectDBComplete are
+ * part of the connection procedure implementation.
*/
/*
@@ -678,7 +673,7 @@ PQconnectdbParams(const char *const *keywords,
PGconn *conn = PQconnectStartParams(keywords, values, expand_dbname);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -731,7 +726,7 @@ PQconnectdb(const char *conninfo)
PGconn *conn = PQconnectStart(conninfo);
if (conn && conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
return conn;
}
@@ -785,7 +780,7 @@ PQconnectStartParams(const char *const *keywords,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -819,15 +814,15 @@ PQconnectStartParams(const char *const *keywords,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -863,7 +858,7 @@ PQconnectStart(const char *conninfo)
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -876,15 +871,15 @@ PQconnectStart(const char *conninfo)
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (!connectDBStart(conn))
+ if (!pqConnectDBStart(conn))
{
- /* Just in case we failed to set it in connectDBStart */
+ /* Just in case we failed to set it in pqConnectDBStart */
conn->status = CONNECTION_BAD;
}
@@ -895,7 +890,7 @@ PQconnectStart(const char *conninfo)
* Move option values into conn structure
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message.
*/
@@ -933,7 +928,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
*
* Internal subroutine to set up connection parameters given an already-
* created PGconn and a conninfo string. Derived settings should be
- * processed by calling connectOptions2 next. (We split them because
+ * processed by calling pqConnectOptions2 next. (We split them because
* PQsetdbLogin overrides defaults in between.)
*
* Returns true if OK, false if trouble (in which case errorMessage is set
@@ -1055,15 +1050,15 @@ libpq_prng_init(PGconn *conn)
}
/*
- * connectOptions2
+ * pqConnectOptions2
*
* Compute derived connection options after absorbing all user-supplied info.
*
* Returns true if OK, false if trouble (in which case errorMessage is set
* and so is conn->status).
*/
-static bool
-connectOptions2(PGconn *conn)
+bool
+pqConnectOptions2(PGconn *conn)
{
int i;
@@ -1822,7 +1817,7 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
* to initialize conn->errorMessage to empty. All subsequent steps during
* connection initialization will only append to that buffer.
*/
- conn = makeEmptyPGconn();
+ conn = pqMakeEmptyPGconn();
if (conn == NULL)
return NULL;
@@ -1901,14 +1896,14 @@ PQsetdbLogin(const char *pghost, const char *pgport, const char *pgoptions,
/*
* Compute derived options
*/
- if (!connectOptions2(conn))
+ if (!pqConnectOptions2(conn))
return conn;
/*
* Connect to the database
*/
- if (connectDBStart(conn))
- (void) connectDBComplete(conn);
+ if (pqConnectDBStart(conn))
+ (void) pqConnectDBComplete(conn);
return conn;
@@ -2277,14 +2272,14 @@ setTCPUserTimeout(PGconn *conn)
}
/* ----------
- * connectDBStart -
+ * pqConnectDBStart -
* Begin the process of making a connection to the backend.
*
* Returns 1 if successful, 0 if not.
* ----------
*/
-static int
-connectDBStart(PGconn *conn)
+int
+pqConnectDBStart(PGconn *conn)
{
if (!conn)
return 0;
@@ -2347,14 +2342,14 @@ connect_errReturn:
/*
- * connectDBComplete
+ * pqConnectDBComplete
*
* Block and complete a connection.
*
* Returns 1 on success, 0 on failure.
*/
-static int
-connectDBComplete(PGconn *conn)
+int
+pqConnectDBComplete(PGconn *conn)
{
PostgresPollingStatusType flag = PGRES_POLLING_WRITING;
time_t finish_time = ((time_t) -1);
@@ -2704,7 +2699,7 @@ keep_going: /* We will come back to here until there is
* combining it with the insertion.
*
* We don't need to initialize conn->prng_state here, because that
- * already happened in connectOptions2.
+ * already happened in pqConnectOptions2.
*/
for (int i = 1; i < conn->naddr; i++)
{
@@ -4181,7 +4176,7 @@ internal_ping(PGconn *conn)
/* Attempt to complete the connection */
if (conn->status != CONNECTION_BAD)
- (void) connectDBComplete(conn);
+ (void) pqConnectDBComplete(conn);
/* Definitely OK if we succeeded */
if (conn->status != CONNECTION_BAD)
@@ -4233,11 +4228,11 @@ internal_ping(PGconn *conn)
/*
- * makeEmptyPGconn
+ * pqMakeEmptyPGconn
* - create a PGconn data structure with (as yet) no interesting data
*/
-static PGconn *
-makeEmptyPGconn(void)
+PGconn *
+pqMakeEmptyPGconn(void)
{
PGconn *conn;
@@ -4330,7 +4325,7 @@ makeEmptyPGconn(void)
* freePGconn
* - free an idle (closed) PGconn data structure
*
- * NOTE: this should not overlap any functionality with closePGconn().
+ * NOTE: this should not overlap any functionality with pqClosePGconn().
* Clearing/resetting of transient state belongs there; what we do here is
* release data that is to be held for the life of the PGconn structure.
* If a value ought to be cleared/freed during PQreset(), do it there not here.
@@ -4516,15 +4511,15 @@ sendTerminateConn(PGconn *conn)
}
/*
- * closePGconn
+ * pqClosePGconn
* - properly close a connection to the backend
*
* This should reset or release all transient state, but NOT the connection
* parameters. On exit, the PGconn should be in condition to start a fresh
* connection with the same parameters (see PQreset()).
*/
-static void
-closePGconn(PGconn *conn)
+void
+pqClosePGconn(PGconn *conn)
{
/*
* If possible, send Terminate message to close the connection politely.
@@ -4567,7 +4562,7 @@ PQfinish(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
freePGconn(conn);
}
}
@@ -4581,9 +4576,9 @@ PQreset(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- if (connectDBStart(conn) && connectDBComplete(conn))
+ if (pqConnectDBStart(conn) && pqConnectDBComplete(conn))
{
/*
* Notify event procs of successful reset.
@@ -4614,9 +4609,9 @@ PQresetStart(PGconn *conn)
{
if (conn)
{
- closePGconn(conn);
+ pqClosePGconn(conn);
- return connectDBStart(conn);
+ return pqConnectDBStart(conn);
}
return 0;
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 0d06e26026..07732927a5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -684,6 +684,12 @@ extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
extern void pqReleaseConnHosts(PGconn *conn);
+extern bool pqConnectOptions2(PGconn *conn);
+extern int pqConnectDBStart(PGconn *conn);
+extern int pqConnectDBComplete(PGconn *conn);
+extern PGconn *pqMakeEmptyPGconn(void);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
+extern void pqClosePGconn(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
--
2.34.1
v30-0004-Add-non-blocking-version-of-PQcancel.patch
From f14412006e804ededda2063b08b37aaa8dbba355 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v30 4/5] Add non-blocking version of PQcancel
This patch makes the following changes in libpq:
1. Add a new PQcancelSend function, which sends cancellation requests
using the regular connection establishment code. This makes sure
that cancel requests support and use all connection options
including encryption.
2. Add a new PQcancelConn function which allows sending cancellation in
a non-blocking way by using it together with the newly added
PQcancelPoll and PQcancelSocket.
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests.
This patch also includes a test for all of libpq cancellation APIs. The
test can be easily run like this:
cd src/test/modules/libpq_pipeline
make && ./libpq_pipeline cancel
---
doc/src/sgml/libpq.sgml | 280 +++++++++++++++--
src/interfaces/libpq/exports.txt | 8 +
src/interfaces/libpq/fe-cancel.c | 281 ++++++++++++++++++
src/interfaces/libpq/fe-connect.c | 130 +++++++-
src/interfaces/libpq/libpq-fe.h | 28 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 +++++++++++++++-
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 963 insertions(+), 38 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index d0d5aefadc..81b4028381 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5281,7 +5281,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelSend"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,13 +6034,223 @@ int PQsetSingleRowMode(PGconn *conn);
this section.
<variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelSend"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelPoll"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSend">
+ <term><function>PQcancelSend</function><indexterm><primary>PQcancelSend</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandons processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelSend(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelSend"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQstatus"/> that can be used for
+ cancellation connections.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ <para>
+ In addition to all the statuses that a <structname>PGconn</structname>
+ can have, this connection can have one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_STARTING</symbol></term>
+ <listitem>
+ <para>
+ Waiting for the first call to <xref linkend="libpq-PQcancelPoll"/>,
+ to actually open the socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelPoll"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ cancel connection is now closed, while a <symbol>CONNECTION_OK</symbol>
+ result for <structname>PGconn</structname> means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQsocket"/> that can be used for
+ cancellation connections.
+<synopsis>
+int PQcancelSocket(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelPoll">
+ <term><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQconnectPoll"/> that can be used for
+ cancellation connections.
+<synopsis>
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *conn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, this connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-PQgetCancel">
<term><function>PQgetCancel</function><indexterm><primary>PQgetCancel</primary></indexterm></term>
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6082,14 +6292,28 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
+ A less secure version of <xref linkend="libpq-PQcancelSend"/>, but one
+ that can be used safely from within a signal handler.
<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> should only be used if it's necessary
+ to cancel a query from a signal handler. If signal safety is not needed,
+ <xref linkend="libpq-PQcancelSend"/> should be used to cancel the query
+ instead. <xref linkend="libpq-PQcancel"/> can be safely invoked from a
+ signal handler, if the <parameter>errbuf</parameter> is a local variable
+ in the signal handler. The <structname>PGcancel</structname> object is
+ read-only as far as <xref linkend="libpq-PQcancel"/> is concerned, so it
+ can also be invoked from a thread that is separate from the one
+ manipulating the <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
@@ -6097,21 +6321,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ To achieve signal-safety, some concessions needed to be made in the
+ implementation of <xref linkend="libpq-PQcancel"/>. Not all connection
+ options of the original connection are used when establishing a
+ connection for the cancellation request. This function connects to the
+ server on the same address and port as the original connection. The
+ only connection options that are honored during this connection are
+ <varname>keepalives</varname>,
+ <varname>keepalives_idle</varname>,
+ <varname>keepalives_interval</varname>,
+ <varname>keepalives_count</varname>, and
+ <varname>tcp_user_timeout</varname>.
+ So, for example
+ <varname>connect_timeout</varname>,
+ <varname>gssencmode</varname>, and
+ <varname>sslmode</varname> are ignored. <emphasis>This means the connection
+ for the cancel request is never encrypted using TLS or GSS</emphasis>.
</para>
</listitem>
</varlistentry>
@@ -6123,13 +6348,22 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelSend"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelSend"/> should be
+ used instead, to avoid the security and thread-safety issues of this
+ function. It has the same security issues as
+ <xref linkend="libpq-PQcancel"/>, but without the benefit of being
+ signal-safe.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -9356,7 +9590,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelSend"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..125bc80679 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,11 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelSend 194
+PQcancelConn 195
+PQcancelPoll 196
+PQcancelStatus 197
+PQcancelSocket 198
+PQcancelErrorMessage 199
+PQcancelReset 200
+PQcancelFinish 201
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c..6420384be7 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,287 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_STARTING;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelSend
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelSend(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * Before we can call PQconnectPoll we first need to start the connection
+ * using pqConnectDBStart. Non-cancel connections already do this whenever
+ * the connection is initialized. But cancel connections wait until the
+ * caller starts polling, because there might be a large delay between
+ * creating a cancel connection and actually wanting to use it.
+ */
+ if (conn->status == CONNECTION_STARTING)
+ {
+ if (!pqConnectDBStart(&cancelConn->conn))
+ {
+ cancelConn->conn.status = CONNECTION_STARTED;
+ return PGRES_POLLING_WRITING;
+ }
+ }
+
+ /*
+ * The rest of the connection establishment we leave to PQconnectPoll,
+ * since it's very similar to normal connection establishment. But once we
+ * get to the CONNECTION_AWAITING_RESPONSE state we need to do our own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we do
+ * receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_STARTING;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 7d8616eb6d..ef33652475 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address, and these fields have already been set up in PQcancelConn, so
+ * leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4494,6 +4587,15 @@ pqReleaseConnHosts(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4547,7 +4649,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3..851e549355 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_STARTING /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,31 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelConn(PGconn *conn);
+
+/* issue a blocking cancel request */
+extern int PQcancelSend(PGcancelConn *conn);
+
+/* issue or poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* a less secure version of PQcancelSend, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 07732927a5..be45d6098a 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -681,6 +690,7 @@ extern int pqSetKeepalivesWin32(pgsocket sock, int idle, int interval);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
extern void pqReleaseConnHosts(PGconn *conn);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de..580003002e 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("monitoring query failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelSend */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelSend(cancelConn))
+ pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (PQcancelStatus(cancelConn) == CONNECTION_BAD)
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 91433d439b..9ffb169e9d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1760,6 +1760,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
--
2.34.1
Attachment: v30-0002-libpq-Add-pqReleaseConnHosts-function.patch (text/x-patch)
From 6b9930707cf960e36aeada8ae689c7cef97594a0 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:28 +0100
Subject: [PATCH v30 2/5] libpq: Add pqReleaseConnHosts function
In a follow-up patch we'll need to free this connhost field in a function
defined in fe-cancel.c, so this extracts the logic into a dedicated
extern function.
---
src/interfaces/libpq/fe-connect.c | 38 ++++++++++++++++++++-----------
src/interfaces/libpq/libpq-int.h | 1 +
2 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index c0dea144a0..079abfca9e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -4349,19 +4349,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
- /* clean up pg_conn_host structures */
- for (int i = 0; i < conn->nconnhost; ++i)
- {
- free(conn->connhost[i].host);
- free(conn->connhost[i].hostaddr);
- free(conn->connhost[i].port);
- if (conn->connhost[i].password != NULL)
- {
- explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
- free(conn->connhost[i].password);
- }
- }
- free(conn->connhost);
+ pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
free(conn->events);
@@ -4480,6 +4468,30 @@ release_conn_addrinfo(PGconn *conn)
}
}
+/*
+ * pqReleaseConnHosts
+ * - Free the host list in the PGconn.
+ */
+void
+pqReleaseConnHosts(PGconn *conn)
+{
+ if (conn->connhost)
+ {
+ for (int i = 0; i < conn->nconnhost; ++i)
+ {
+ free(conn->connhost[i].host);
+ free(conn->connhost[i].hostaddr);
+ free(conn->connhost[i].port);
+ if (conn->connhost[i].password != NULL)
+ {
+ explicit_bzero(conn->connhost[i].password, strlen(conn->connhost[i].password));
+ free(conn->connhost[i].password);
+ }
+ }
+ free(conn->connhost);
+ }
+}
+
/*
* sendTerminateConn
* - Send a terminate message to backend.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index ff8e0dce77..0d06e26026 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -683,6 +683,7 @@ extern int pqPacketSend(PGconn *conn, char pack_type,
extern bool pqGetHomeDirectory(char *buf, int bufsize);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
+extern void pqReleaseConnHosts(PGconn *conn);
extern pgthreadlock_t pg_g_threadlock;
base-commit: 7e0ade0ffe0a76b1926a4af39ecdf799c96ef1ba
--
2.34.1
Attachment: v30-0005-Start-using-new-libpq-cancel-APIs.patch (text/x-patch)
From 920e43636033ad384868db2d8d0479c803ca8a74 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v30 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in the codebase with these newer
ones.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d..81749b2cdd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelSend(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf591..3ac74ff6a7 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (PQcancelStatus(cancel_conn) == CONNECTION_BAD)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb21..16206a23a9 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e..01a9875061 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461f..c5cd2f5787 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelSend(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153..de31a87571 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelSend(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
Hello,
The patched docs claim that PQrequestCancel is insecure, but neither the
code nor docs explain why. The docs for PQcancel on the other hand do
mention that encryption is not used; does that apply to PQrequestCancel
as well and is that the reason? If so, I think we should copy the
warning and perhaps include a code comment about that. Also, maybe that
final phrase in PQcancel should be a <caution> box: remove from "So, for
example" and add <caution><para>Because gssencmode and sslmode are
not preserved from the original connection, the cancel request is not
encrypted.</para></caution> or something like that.
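Spelled out, such a caution box might look like this in the SGML source (wording is only a sketch):

```sgml
<caution>
 <para>
  Because <literal>gssencmode</literal> and <literal>sslmode</literal> are
  not preserved from the original connection, the cancel request is not
  sent over an encrypted connection.
 </para>
</caution>
```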
I wonder if Section 33.7 Canceling Queries in Progress should be split
into three subsections, and I propose the following order:
33.7.1 PGcancelConn-based Cancellation API
PQcancelConn -- we first document the basics
PQcancelSend
PQcancelFinish
PQcancelPoll -- the nonblocking interface is documented next
PQcancelReset -- reuse a cancelconn, later in docs because it's more advanced
PQcancelStatus -- accessors go last
PQcancelSocket
PQcancelErrorMessage
33.7.2 Obsolete interface
PQgetCancel
PQfreeCancel
PQcancel
33.7.3 Deprecated and Insecure Methods
PQrequestCancel
I have a hard time coming up with good subsection titles though.
Now, looking at this list, I think it's surprising that the nonblocking
request for a cancellation is called PQcancelPoll. PQcancelSend() is at
odds with the asynchronous query API, which uses the verb "send" for the
asynchronous variants. This would suggest that PQcancelPoll should
actually be called PQcancelSend or maybe PQcancelStart (mimicking
PQconnectStart). I'm not sure what's a good alternative name for the
blocking one, which you have called PQcancelSend.
I see upthread that the names of these functions were already quite
heavily debated. Sorry to beat that dead horse some more ... I'm just
not sure it's a decided matter.
Lastly -- the doc blurbs that say simply "a version of XYZ that can be
used for cancellation connections" are a bit underwhelming. Shouldn't
we document these more fully instead of making users go read the docs
for the other functions and wonder what the differences might be, if
any?
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Before you were born your parents weren't as boring as they are now. They
got that way paying your bills, cleaning up your room and listening to you
tell them how idealistic you are." -- Charles J. Sykes' advice to teenagers
On Fri, 2 Feb 2024 at 16:06, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Now, looking at this list, I think it's surprising that the nonblocking
request for a cancellation is called PQcancelPoll. PQcancelSend() is at
odds with the asynchronous query API, which uses the verb "send" for the
asynchronous variants. This would suggest that PQcancelPoll should
actually be called PQcancelSend or maybe PQcancelStart (mimicking
PQconnectStart). I'm not sure what's a good alternative name for the
blocking one, which you have called PQcancelSend.
I agree that Send is an unfortunate suffix. I'd love to use PQcancel
for this, but obviously that one is already taken. Some other options
that I can think of are (from favorite to less favorite):
- PQcancelBlocking
- PQcancelAndWait
- PQcancelGo
- PQcancelNow
Finally, another option would be to rename PQcancelConn to
PQgetCancelConn and then rename PQcancelSend to PQcancelConn.
Regarding PQcancelPoll, I think it's a good name for the polling
function, but I agree it's a bit confusing to use it to also start
sending the connection. Even the code of PQcancelPoll basically admits
that this is confusing behaviour:
/*
* Before we can call PQconnectPoll we first need to start the connection
* using pqConnectDBStart. Non-cancel connections already do this whenever
* the connection is initialized. But cancel connections wait until the
* caller starts polling, because there might be a large delay between
* creating a cancel connection and actually wanting to use it.
*/
if (conn->status == CONNECTION_STARTING)
{
if (!pqConnectDBStart(&cancelConn->conn))
{
cancelConn->conn.status = CONNECTION_STARTED;
return PGRES_POLLING_WRITING;
}
}
The only reasonable thing I can think of to make that situation better
is to move that part of the function outside of PQcancelPoll and
create a dedicated PQcancelStart function for it. It introduces an
extra function, but it does seem more in line with how we do the
regular connection establishment. Basically you would have code like
this then, which looks quite nice honestly:
cancelConn = PQcancelConn(conn);
if (!PQcancelStart(cancelConn))
pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
while (true)
{
// polling using PQcancelPoll here
}
On 2024-Feb-02, Jelte Fennema-Nio wrote:
The only reasonable thing I can think of to make that situation better
is to move that part of the function outside of PQcancelPoll and
create a dedicated PQcancelStart function for it. It introduces an
extra function, but it does seem more in line with how we do the
regular connection establishment. Basically you would have code like
this then, which looks quite nice honestly:
cancelConn = PQcancelConn(conn);
if (!PQcancelStart(cancelConn))
pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
while (true)
{
// polling using PQcancelPoll here
}
Maybe this is okay? I'll have a look at the whole final situation more
carefully later; or if somebody else wants to share an opinion, please
do so.
In the meantime I pushed your 0002 and 0003 patches, so you can take
this as an opportunity to rebase the remaining ones.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"The saddest aspect of life right now is that science gathers knowledge faster
than society gathers wisdom." (Isaac Asimov)
On Sun, 4 Feb 2024 at 16:39, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Maybe this is okay? I'll have a look at the whole final situation more
carefully later; or if somebody else wants to share an opinion, please
do so.
Attached is a new version of the final patches, with much improved
docs (imho) and the new function names: PQcancelStart and
PQcancelBlocking.
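So with the new names, the simple blocking case becomes something like this (a sketch; error handling abbreviated):

```c
PGcancelConn *cancelConn = PQcancelConn(conn);

if (!PQcancelBlocking(cancelConn))
    fprintf(stderr, "cancel request failed: %s",
            PQcancelErrorMessage(cancelConn));
PQcancelFinish(cancelConn);
```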
Attachments:
Attachment: v31-0004-libpq-Add-encrypted-and-non-blocking-versions-of.patch (application/octet-stream)
From f7787e2e4fc3428353acc213c12f6604b3784a9a Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v31 4/5] libpq: Add encrypted and non-blocking versions of
PQcancel
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. It also does not encrypt the
connection over which the cancel request is sent, even when the
original connection required encryption. The new PQcancelConn-based
API fixes both problems.
This patch adds a bunch of new functions which, together, allow users to
send cancel requests in an encrypted and performant way. The primary new
functions are PQcancelBlocking and PQcancelStart (for blocking and
non-blocking requests respectively). These functions reuse the normal
connection establishment code, so that they can apply the same connection
options, such as sslmode and gssencmode, that the original connection used.
---
doc/src/sgml/libpq.sgml | 354 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 282 ++++++++++++++
src/interfaces/libpq/fe-connect.c | 130 ++++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 ++++++++++++-
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1035 insertions(+), 45 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1d8998efb2a..1613fcc7bb4 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5278,7 +5278,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6025,10 +6025,295 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the connection
+ has been successfully made.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+ connection procedure has failed; if the call returns <symbol>CONNECTION_OK</symbol>, then the
+ connection is ready. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ <para>
+ This function is almost the same as <xref linkend="libpq-PQstatus"/>, only
+ for a <structname>PGcancelConn</structname> instead of a
+ <structname>PGconn</structname>. In addition to all the statuses that
+ <xref linkend="libpq-PQstatus"/> can return for a
+ <structname>PGconn</structname>, <function>PQcancelStatus</function>
+ can return one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+ it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ cancel connection is now closed, while a <symbol>CONNECTION_OK</symbol>
+ result for <structname>PGconn</structname> means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+ currently open. This might change as a result of calling all of the
+ functions in this section on the (except for
+ <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_BAD</symbol>, then this function can be
+ called on the <structname>PGcancelConn</structname> to retrieve the
+ error message.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelconn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+      <para>
+       If the <symbol>PGcancelConn</symbol> is currently being used to send a
+       cancel request, then that connection is closed first. The
+       <symbol>PGcancelConn</symbol> object is then prepared so it can be used
+       to send a new cancel request. This makes it possible to create one
+       <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse
+       it multiple times throughout the lifetime of the original
+       <symbol>PGconn</symbol>.
+      </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+   <para>
+    These functions represent older methods of sending cancel requests.
+    Although they still work, they are deprecated because they do not send
+    the cancel request in an encrypted manner, even when the original
+    connection specified <literal>sslmode</literal> or
+    <literal>gssencmode</literal> to require encryption. Their use in new
+    code is therefore strongly discouraged, and it is recommended to change
+    existing code to use the new functions instead.
+   </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6037,7 +6322,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6079,36 +6364,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+      <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+      variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+      used safely from within a signal handler.
+<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+      <xref linkend="libpq-PQcancel"/> exists only for backwards
+      compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should
+      be used instead. The only benefit of <xref linkend="libpq-PQcancel"/>
+      is that it can be safely invoked from a signal handler, if
+      <parameter>errbuf</parameter> is a local variable in the signal handler.
+      However, this is generally not considered a big enough benefit to be
+      worth the security issues of this function.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6120,13 +6406,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+      <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+      compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should
+      be used instead. There is no benefit to using
+      <xref linkend="libpq-PQrequestCancel"/> over
+      <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6141,7 +6435,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9353,7 +9647,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..0ae814490e6 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelConn 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c4..e66b8819ee0 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,288 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Asynchronously cancel a query on the given connection. This requires polling
+ * the returned PGcancelConn to actually complete the cancellation of the
+ * query.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * We leave most of the connection establishment to PQconnectPoll, since
+ * it's very similar to normal connection establishment. But once we get
+ * to the CONNECTION_AWAITING_RESPONSE state we need to start doing our
+ * own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..b4e7394314f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address, and these fields have already been set up in PQcancelConn, so
+ * leave these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4650,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..523ea6535f3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelConn(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d2..1982cd4ded2 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -687,6 +696,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..7f32b61df40 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * Test that PQcancelReset works on the cancel connection and that it can
+ * be reused afterwards.
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d808aad8b05..b2b83b69c2f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1760,6 +1760,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
base-commit: bd8fc1677b88ed80e4e00e0e46401ec537952482
--
2.34.1
Attachment: v31-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From be305ce80abb63a6fb38aa6b41de735717820aa5 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v31 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
.../modules/libpq_pipeline/libpq_pipeline.c | 6 +-
7 files changed, 148 insertions(+), 55 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..8b4013c480a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..cf32bd986f0 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..80692d073cd 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..b049fab267d 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 7f32b61df40..97f21fe9271 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -217,11 +217,11 @@ test_cancel(PGconn *conn, const char *conninfo)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_canceled(conn);
- /* test PQcancelSend */
+ /* test PQcancelBlocking */
send_cancellable_query(conn, monitorConn);
cancelConn = PQcancelConn(conn);
- if (!PQcancelSend(cancelConn))
- pg_fatal("failed to run PQcancelSend: %s", PQcancelErrorMessage(cancelConn));
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
confirm_query_canceled(conn);
PQcancelFinish(cancelConn);
--
2.34.1
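The pgfdw_cancel_query_begin rewrite above follows exactly the pattern the new API is designed for: call PQcancelPoll, wait on the socket in whichever direction it reports, and give up once the endtime deadline passes. As an illustrative sketch of that loop (Python rather than C; the state constants and the poll/wait callbacks are stand-ins for PQcancelPoll and WaitLatchOrSocket, not part of the patch):

```python
import time

# Polling states mirroring libpq's PostgresPollingStatusType (illustrative values)
POLLING_READING = "reading"
POLLING_WRITING = "writing"
POLLING_OK = "ok"
POLLING_FAILED = "failed"

def drive_cancel(poll, wait_readable, wait_writable, deadline):
    """Drive a PQcancelPoll-style loop until success, failure, or timeout."""
    while True:
        status = poll()
        if status == POLLING_OK:
            return "ok"
        if status == POLLING_FAILED:
            return "failed"
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return "timeout"          # maps to "could not cancel request due to timeout"
        if status == POLLING_READING:
            wait_readable(remaining)  # e.g. wait for WL_SOCKET_READABLE
        else:
            wait_writable(remaining)  # e.g. wait for WL_SOCKET_WRITEABLE

# Scripted poll sequence standing in for a real PGcancelConn
states = iter([POLLING_WRITING, POLLING_READING, POLLING_OK])
waits = []
result = drive_cancel(lambda: next(states),
                      lambda t: waits.append("r"),
                      lambda t: waits.append("w"),
                      time.monotonic() + 5)
```

In the actual C code the caller additionally runs CHECK_FOR_INTERRUPTS() after each wait and always calls PQcancelFinish in a PG_FINALLY block so the PGcancelConn is not leaked on error.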
On 2024-Feb-14, Jelte Fennema-Nio wrote:
Attached is a new version of the final patches, with much improved
docs (imho) and the new function names: PQcancelStart and
PQcancelBlocking.
Hmm, I think the changes to libpq_pipeline in 0005 should be in 0004.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
On Wed, 14 Feb 2024 at 18:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hmm, I think the changes to libpq_pipeline in 0005 should be in 0004.
Yeah, you're correct. Fixed that now.
Attachments:
v32-0005-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 2922e578a7348825116024d2e2793af664af2c8d Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v32 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..8b4013c480a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..cf32bd986f0 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index b5a38aeb214..16206a23a9d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index f410c3db4e6..01a98750611 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..80692d073cd 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..b049fab267d 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
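Whichever API variant sends it, the message that ultimately travels over a cancel connection is the frontend/backend protocol's CancelRequest packet: a 16-byte frame carrying the cancel request code plus the backend PID and secret key that libpq saved from the BackendKeyData message at connection time. A sketch of the layout (Python, purely for illustration; field order per the protocol documentation):

```python
import struct

# The CancelRequest code is defined by the protocol as (1234 << 16) | 5678
CANCEL_REQUEST_CODE = (1234 << 16) | 5678  # 80877102

def build_cancel_request(backend_pid: int, secret_key: int) -> bytes:
    """Build the 16-byte CancelRequest packet: message length (16), request
    code, backend PID, and secret key, each a network-order int32."""
    return struct.pack("!iiii", 16, CANCEL_REQUEST_CODE, backend_pid, secret_key)

packet = build_cancel_request(4242, 987654321)
```

The new PGcancelConn machinery changes how the connection that carries this packet is established (non-blocking, optionally TLS/GSS encrypted), not the packet itself.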
v32-0004-libpq-Add-encrypted-and-non-blocking-versions-of.patch (application/octet-stream)
From b7320567c17d4660089dd621818768c41c33a66c Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v32 4/5] libpq: Add encrypted and non-blocking versions of
PQcancel
The existing PQcancel API is using blocking IO. This makes PQcancel
impossible to use in an event loop based codebase, without blocking the
event loop until the call returns. PQcancelConn can now be used instead,
to have a non-blocking way of sending cancel requests. It also doesn't
encrypt the connection over which the cancel request is sent, even when
the original connection required encryption.
This patch adds a bunch of new functions which, together, allow users to
send cancel requests in an encrypted and performant way. The primary new
functions are PQcancelBlocking and PQcancelStart (for blocking and
non-blocking requests respectively). These functions reuse the normal
connection establishment code, so that they can apply the same connection
options such as sslmode and gssencmode that the original connection used.
---
doc/src/sgml/libpq.sgml | 354 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 282 ++++++++++++++
src/interfaces/libpq/fe-connect.c | 130 ++++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 ++++++++++++-
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1035 insertions(+), 45 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1d8998efb2a..1613fcc7bb4 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5278,7 +5278,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6025,10 +6025,295 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the connection
+ has been successfully made.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+ connection procedure has failed; if the call returns <symbol>CONNECTION_OK</symbol>, then the
+ connection is ready. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ <para>
+ This function is almost the same as <xref linkend="libpq-PQstatus"/>, only
+ for a <structname>PGcancelConn</structname> instead of a
+ <structname>PGconn</structname>. In addition to all the statuses that
+ <xref linkend="libpq-PQstatus"/> can return for a
+ <structname>PGconn</structname>, <function>PQcancelStatus</function>
+ can return one additional status:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-starting">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+          Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+          <xref linkend="libpq-PQcancelBlocking"/> to actually open the
+          socket and start sending the cancel request. This is the
+          connection state right after calling <xref linkend="libpq-PQcancelConn"/>
+          or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+          server has been initiated yet at this point.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </para>
+
+ <para>
+ One final note about the returned statuses is that
+ <symbol>CONNECTION_OK</symbol> has a slightly different meaning for a
+ <structname>PGcancelConn</structname> than what it has for a
+ <structname>PGconn</structname>. When <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_OK</symbol> for a <structname>PGcancelConn</structname>
+      it means that the dispatch of the cancel request has completed (although
+ this is no promise that the query was actually canceled) and that the
+ cancel connection is now closed, while a <symbol>CONNECTION_OK</symbol>
+ result for <structname>PGconn</structname> means that queries can be
+ sent over the connection.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+      currently open. This might change as a result of calling any of the
+      other functions in this section on the
+      <structname>PGcancelConn</structname> (except for
+ <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_BAD</symbol>, then this function can be
+ called on the <structname>PGcancelConn</structname> to retrieve the
+ error message.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelconn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+       Closes the cancel connection (if it has not yet finished sending the
+       cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+       If the <symbol>PGcancelConn</symbol> is currently being used to send
+       a cancel request, this connection is closed first. The
+       <symbol>PGcancelConn</symbol> object is then prepared so that it can
+       be used to send a new cancel request. This way a single
+       <symbol>PGcancelConn</symbol> can be created for a <symbol>PGconn</symbol>
+       and reused multiple times throughout the lifetime of the original
+       <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+ These functions represent older methods of sending cancel requests.
+     Although they still work, they are deprecated because they do not send
+     the cancel request in an encrypted manner, even when the original
+     connection required encryption through <literal>sslmode</literal> or
+     <literal>gssencmode</literal>. Their use in new code is therefore
+     strongly discouraged, and it is recommended to change existing code to
+ </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6037,7 +6322,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6079,36 +6364,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+      <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+      variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+      used safely from within a signal handler.
+<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+      <xref linkend="libpq-PQcancel"/> exists only for backwards
+      compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should be
+      used instead. The only benefit of <xref linkend="libpq-PQcancel"/>
+      is that it can be safely invoked from a signal handler, if the
+      <parameter>errbuf</parameter> is a local variable in the signal handler.
+      However, this is generally not considered a big enough benefit to
+      outweigh the security issues of this function.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6120,13 +6406,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+      <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+      compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6141,7 +6435,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9353,7 +9647,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
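As a rough sketch of how an application might drive the new API from an event loop (illustrative only, not part of the patch; `wait_for_socket` is a hypothetical caller-provided helper standing in for whatever readiness mechanism the event loop offers):

```c
/* Sketch: cancel the query running on conn without blocking. */
PGcancelConn *cancelConn = PQcancelConn(conn);

if (!PQcancelStart(cancelConn))
    fprintf(stderr, "cancel failed: %s\n", PQcancelErrorMessage(cancelConn));
else
{
    /* Per the docs, behave as if the last poll returned WRITING. */
    PostgresPollingStatusType st = PGRES_POLLING_WRITING;

    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        /* hand the socket to the event loop, waiting for readability
         * or writability depending on the last poll result */
        wait_for_socket(PQcancelSocket(cancelConn),
                        /* want_read = */ st == PGRES_POLLING_READING);
        st = PQcancelPoll(cancelConn);
    }
}

if (PQcancelStatus(cancelConn) != CONNECTION_OK)
    fprintf(stderr, "cancel failed: %s\n", PQcancelErrorMessage(cancelConn));
PQcancelFinish(cancelConn);
```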
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..0ae814490e6 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelConn 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c4..e66b8819ee0 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,288 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Prepares a connection object that can be used to cancel a query on the
+ * given connection. To actually send the cancellation, the returned
+ * PGcancelConn must be used with PQcancelStart/PQcancelPoll or
+ * PQcancelBlocking.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+	 * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+	if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+	cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+	 * We leave most of the connection establishment to PQconnectPoll, since
+	 * it's very similar to normal connection establishment. But once we get
+	 * to the CONNECTION_AWAITING_RESPONSE state we need to start doing our
+	 * own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+	 * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
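PQcancelPoll above hinges on the convention that a clean EOF from the server means the cancel request was processed, while leftover data (or a nonzero errno) means failure. A tiny self-contained illustration of that classification on a plain socket (a standalone demo, not libpq code):

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Classify one read() result the way a cancel connection would:
 * 1 = clean EOF (success), 0 = unexpected data, -1 = real error. */
static int
classify_read(int fd)
{
	char		c;
	ssize_t		n = read(fd, &c, 1);

	if (n == 0)
		return 1;				/* peer closed cleanly: cancel handled */
	if (n > 0)
		return 0;				/* unexpected payload */
	return -1;					/* read error, errno has details */
}
```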
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..b4e7394314f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+	 * Cancel requests are special, though: they should try only one host and
+ * address, and these fields have already been set up in PQcancelConn, so
+ * leave these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+			 * For non-cancel requests we can release the address list
+			 * now. For cancel requests we never actually resolve
+			 * addresses; instead the addrinfo exists for the lifetime
+			 * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4650,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
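The hunk above fills in a CancelRequestPacket and sends it with pqPacketSend, which prepends a 4-byte length word. A self-contained sketch of what ends up on the wire, per the PostgreSQL frontend/backend protocol (total length 16, request code 80877102, then backend PID and secret key, all big-endian):

```c
#include <assert.h>
#include <stdint.h>

/* The protocol's cancel request code: 1234 in the high 16 bits,
 * 5678 in the low 16 bits (decimal 80877102). */
#define CANCEL_REQUEST_CODE ((1234 << 16) | 5678)

/* Store v into buf in network (big-endian) byte order. */
static void
put_be32(unsigned char *buf, uint32_t v)
{
	buf[0] = (unsigned char) (v >> 24);
	buf[1] = (unsigned char) (v >> 16);
	buf[2] = (unsigned char) (v >> 8);
	buf[3] = (unsigned char) v;
}

/* Build the full 16-byte cancel request message. */
static void
build_cancel_request(unsigned char buf[16], uint32_t be_pid, uint32_t be_key)
{
	put_be32(buf + 0, 16);					/* message length, self-inclusive */
	put_be32(buf + 4, CANCEL_REQUEST_CODE); /* identifies a cancel request */
	put_be32(buf + 8, be_pid);				/* backend PID */
	put_be32(buf + 12, be_key);				/* cancel secret key */
}
```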
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..523ea6535f3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -78,7 +78,9 @@ typedef enum
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Checking target server properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -165,6 +167,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -321,16 +328,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelConn(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d2..1982cd4ded2 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -687,6 +696,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..97f21fe9271 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+	if (PQstatus(monitorConn) != CONNECTION_OK)
+	{
+		pg_fatal("Connection to database failed: %s",
+				 PQerrorMessage(monitorConn));
+	}
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+	}
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+	}
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d808aad8b05..b2b83b69c2f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1760,6 +1760,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
base-commit: bd8fc1677b88ed80e4e00e0e46401ec537952482
--
2.34.1
In patch 0004, I noticed a couple of typos in the documentation; please
find attached a fixup patch correcting these.
Still in the documentation, same patch, the last paragraph documenting
PQcancelPoll() ends as:
+ indicate the current stage of the connection procedure and might
be useful
+ to provide feedback to the user for example. These statuses are:
+ </para>
while not actually listing the "statuses". Should we list them? Adjust
the wording? Or refer to PQconnectPoll() documentation (since the
paragraph is copied from there it seems)?
Otherwise, the feature still works fine as far as I can tell.
Attachments:
0001-fixup-libpq-Add-encrypted-and-non-blocking-versions-.patchtext/x-patch; charset=UTF-8; name=0001-fixup-libpq-Add-encrypted-and-non-blocking-versions-.patchDownload
From 3e04442e3f283829ed38e4a2b435fd182addf87a Mon Sep 17 00:00:00 2001
From: Denis Laxalde <denis.laxalde@dalibo.com>
Date: Wed, 6 Mar 2024 14:55:40 +0100
Subject: [PATCH] fixup! libpq: Add encrypted and non-blocking versions of
PQcancel
---
doc/src/sgml/libpq.sgml | 2 +-
src/interfaces/libpq/fe-cancel.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1613fcc7bb..1281cac284 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -6122,7 +6122,7 @@ PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
<para>
The request is made over the given <structname>PGcancelConn</structname>,
which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
- The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ The return value of <xref linkend="libpq-PQcancelStart"/>
is 1 if the cancellation request could be started and 0 if not.
If it was unsuccessful, the error message can be
retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index e66b8819ee..9c9a23bb4d 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -146,7 +146,7 @@ PQcancelBlocking(PGcancelConn *cancelConn)
/*
* PQcancelStart
*
- * Starts sending a cancellation request in a blocking fashion. Returns
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
* 1 if successful 0 if not.
*/
int
--
2.39.2
On Wed, 6 Mar 2024 at 15:03, Denis Laxalde <denis.laxalde@dalibo.com> wrote:
> In patch 0004, I noticed a couple of typos in the documentation; please
> find attached a fixup patch correcting these.
Thanks, applied.
> while not actually listing the "statuses". Should we list them?
I listed the relevant statuses now and updated the PQcancelStatus
docs to look more like the PQstatus ones. I didn't list any statuses
that a cancel connection can never have (but a normal connection
can).
While going over the list of statuses possible for a cancel connection
I realized that the docs for PQconnectStart were not listing all
relevant statuses, so I fixed that in patch 0001.
Attachments:
v32-0001-Add-missing-connection-statuses-to-docs.patchtext/x-patch; charset=US-ASCII; name=v32-0001-Add-missing-connection-statuses-to-docs.patchDownload
From 40d3d9b0f4058bcf3041e63f71ce4c56e43e73f2 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Wed, 6 Mar 2024 18:33:49 +0100
Subject: [PATCH v32 1/3] Add missing connection statuses to docs
The list of connection statuses that PQstatus might return during an
asynchronous connection attempt was incorrect:
1. CONNECTION_SETENV is never returned anymore and is only part of the
enum for backwards compatibility. So it's removed from the list.
2. CONNECTION_CHECK_STANDBY and CONNECTION_GSS_STARTUP were not listed.
This addresses those problems. CONNECTION_NEEDED and
CONNECTION_CHECK_TARGET are not listed in the docs on purpose, since
these states are internal states that can never be observed by a caller
of PQstatus.
---
doc/src/sgml/libpq.sgml | 15 ++++++++++++---
src/interfaces/libpq/libpq-fe.h | 3 ++-
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1d8998efb2a..a2bbf33d029 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -428,11 +428,11 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn);
</listitem>
</varlistentry>
- <varlistentry id="libpq-connection-setenv">
- <term><symbol>CONNECTION_SETENV</symbol></term>
+ <varlistentry id="libpq-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
<listitem>
<para>
- Negotiating environment-driven parameter settings.
+ Negotiating GSS encryption.
</para>
</listitem>
</varlistentry>
@@ -446,6 +446,15 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn);
</listitem>
</varlistentry>
+ <varlistentry id="libpq-connection-check-standby">
+ <term><symbol>CONNECTION_CHECK_STANDBY</symbol></term>
+ <listitem>
+ <para>
+ Checking if connection is to a server in standby mode.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-connection-consume">
<term><symbol>CONNECTION_CONSUME</symbol></term>
<listitem>
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..1e5e7481a7c 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -77,7 +77,8 @@ typedef enum
CONNECTION_CHECK_WRITABLE, /* Checking if session is read-write. */
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
- CONNECTION_CHECK_TARGET, /* Checking target server properties. */
+ CONNECTION_CHECK_TARGET, /* Internal state: Checking target server
+ * properties. */
CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
} ConnStatusType;
base-commit: de7c6fe8347ab726c80ebbfcdb57f4b714d5243d
--
2.34.1
v32-0003-Start-using-new-libpq-cancel-APIs.patchtext/x-patch; charset=US-ASCII; name=v32-0003-Start-using-new-libpq-cancel-APIs.patchDownload
From 15d91bb8ca87764eee02d785126495df1f6ffd5f Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v32 3/3] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs with the newer ones in most of
the codebase. psql and pgbench are specifically left out, as they would
need a much larger refactor to be able to use the new functions, since
those are not signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..8b4013c480a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelConn(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..cf32bd986f0 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c355e8f3f7d..8892abc3502 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 812e7646e16..8aa528002f7 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..80692d073cd 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelConn(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..b049fab267d 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelConn(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v32-0002-libpq-Add-encrypted-and-non-blocking-versions-of.patchtext/x-patch; charset=US-ASCII; name=v32-0002-libpq-Add-encrypted-and-non-blocking-versions-of.patchDownload
From 0701581620b4b43fa11aba9939ae0c67b6e601eb Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v32 2/3] libpq: Add encrypted and non-blocking versions of
PQcancel
The existing PQcancel API uses blocking I/O. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the
original connection required encryption. PQcancelConn can now be used
instead, to send cancel requests in a non-blocking and encrypted
fashion.
This patch adds a number of new functions which, together, allow users
to send cancel requests in an encrypted and performant way. The primary
new functions are PQcancelBlocking and PQcancelStart (for blocking and
non-blocking requests respectively). These functions reuse the normal
connection establishment code, so that they can apply the same
connection options, such as sslmode and gssencmode, that the original
connection used.
---
doc/src/sgml/libpq.sgml | 446 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 282 +++++++++++
src/interfaces/libpq/fe-connect.c | 130 ++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 10 +
.../modules/libpq_pipeline/libpq_pipeline.c | 263 ++++++++++-
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1127 insertions(+), 45 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a2bbf33d029..325eddec8bf 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5287,7 +5287,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,10 +6034,387 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelConn">
+ <term><function>PQcancelConn</function><indexterm><primary>PQcancelConn</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelConn(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelConn"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ If the original connection is encrypted (using TLS or GSS), then the
+ connection for the cancel request is encrypted in the same way. Any
+ connection options that are only used during authentication or after
+ authentication of the client are ignored though, because cancellation
+ requests do not require authentication and the connection is closed right
+ after the cancellation request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelConn</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelConn"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the cancel connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the cancel
+ request was successfully dispatched.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+ cancel procedure has failed; if the call returns <function>CONNECTION_OK</function>, then the cancel request was successfully dispatched. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-allocated">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelConn"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-started">
+ <term><symbol>CONNECTION_STARTED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for connection to be made.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-made">
+ <term><symbol>CONNECTION_MADE</symbol></term>
+ <listitem>
+ <para>
+ Connection OK; waiting to send.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-awaiting-response">
+ <term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a response from the server.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-ssl-startup">
+ <term><symbol>CONNECTION_SSL_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating SSL encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating GSS encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ Note that, although these constants will remain (in order to maintain
+ compatibility), an application should never rely upon these occurring in a
+ particular order, or at all, or on the status always being one of these
+ documented values. An application might do something like this:
+<programlisting>
+switch(PQcancelStatus(cancelConn))
+{
+ case CONNECTION_STARTED:
+ feedback = "Connecting...";
+ break;
+
+ case CONNECTION_MADE:
+ feedback = "Connected to server...";
+ break;
+.
+.
+.
+ default:
+ feedback = "Connecting...";
+}
+</programlisting>
+ </para>
+
+ <para>
+ The <literal>connect_timeout</literal> connection parameter is ignored
+ when using <function>PQcancelPoll</function>; it is the application's
+ responsibility to decide whether an excessive amount of time has elapsed.
+ Otherwise, <function>PQcancelStart</function> followed by a
+ <function>PQcancelPoll</function> loop is equivalent to
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The status can be one of a number of values. However, only three of
+ these are seen outside of an asynchronous cancel procedure:
+ <literal>CONNECTION_ALLOCATED</literal>,
+ <literal>CONNECTION_OK</literal> and
+ <literal>CONNECTION_BAD</literal>. The initial state of a
+ <structname>PGcancelConn</structname> that's successfully created using
+ <xref linkend="libpq-PQcancelConn"/> is <literal>CONNECTION_ALLOCATED</literal>.
+ A cancel request that was successfully dispatched
+ has the status <literal>CONNECTION_OK</literal>. A failed
+ cancel attempt is signaled by status
+ <literal>CONNECTION_BAD</literal>. An OK status will
+ remain so until <xref linkend="libpq-PQcancelFinish"/> or
+ <xref linkend="libpq-PQcancelReset"/> is called.
+ </para>
+
+ <para>
+ See the entry for <xref linkend="libpq-PQcancelStart"/> and <xref
+ linkend="libpq-PQcancelPoll"/> with regards to other status codes that
+ might be returned.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+ currently open. This might change as a result of calling any of the
+ other functions in this section on the <structname>PGcancelConn</structname>
+ (except for <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ A version of <xref linkend="libpq-PQerrorMessage"/> that can be used for
+ cancellation connections. If <xref linkend="libpq-PQcancelStatus"/>
+ returns <symbol>CONNECTION_BAD</symbol>, then this function can be
+ called on the <structname>PGcancelConn</structname> to retrieve the
+ error message.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelconn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request) and frees the memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed. The
+ <symbol>PGcancelConn</symbol> object is then prepared so it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
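The non-blocking flow documented above (PQcancelConn, PQcancelStart, a PQcancelPoll loop, PQcancelFinish) can be sketched roughly as follows. This is a hypothetical helper, not part of the patch: the function name `cancel_current_query` is invented, error reporting is abbreviated, and a real event loop would wait on the socket itself instead of calling select() directly.

```c
#include <stdbool.h>
#include <sys/select.h>
#include "libpq-fe.h"

/* Hypothetical sketch: drive a non-blocking cancel request to completion.
 * Returns true if the cancel request was successfully delivered. */
static bool
cancel_current_query(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelConn(conn);
	bool		success = false;

	if (!PQcancelStart(cancelConn))
		goto done;

	for (;;)
	{
		PostgresPollingStatusType st = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		input_mask;
		fd_set		output_mask;

		if (st == PGRES_POLLING_OK)
		{
			success = true;
			break;
		}
		if (st == PGRES_POLLING_FAILED || sock < 0)
			break;

		FD_ZERO(&input_mask);
		FD_ZERO(&output_mask);
		if (st == PGRES_POLLING_READING)
			FD_SET(sock, &input_mask);
		else
			FD_SET(sock, &output_mask);

		/* In a real event loop, this wait would be handled by the loop
		 * (with a timeout, since connect_timeout is ignored here). */
		if (select(sock + 1, &input_mask, &output_mask, NULL, NULL) < 0)
			break;
	}

done:
	PQcancelFinish(cancelConn);		/* safe even on failure paths */
	return success;
}
```

After a successful return, the canceled query on `conn` is expected to fail with SQLSTATE 57014, as exercised by the test in this patch.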
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+ These functions represent older methods of sending cancel requests.
+ Although they still work, they are deprecated because they do not send
+ the cancel request in an encrypted manner, even when the original
+ connection specified <literal>sslmode</literal> or
+ <literal>gssencmode</literal> to require encryption. Use of these
+ functions in new code is therefore discouraged, and it is recommended
+ to change existing code to use the new functions instead.
+ </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6046,7 +6423,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6088,36 +6465,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+ <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+ used safely from within a signal handler.
+<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+ <xref linkend="libpq-PQcancel"/> exists only for backwards
+ compatibility; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. The only advantage of <xref linkend="libpq-PQcancel"/>
+ is that it can be safely invoked from a signal handler, provided that
+ <parameter>errbuf</parameter> is a local variable in the signal handler.
+ However, this advantage is generally not considered worth the security
+ issues of this function.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6129,13 +6507,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6150,7 +6536,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9362,7 +9748,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..0ae814490e6 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelConn 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c4..9c9a23bb4d0 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,288 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+
+/*
+ * PQcancelConn
+ *
+ * Prepares a PGcancelConn that can be used to cancel a query on the given
+ * connection. To actually send the cancel request, pass the returned
+ * PGcancelConn to PQcancelBlocking or PQcancelStart/PQcancelPoll.
+ */
+PGcancelConn *
+PQcancelConn(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = (PGconn *) cancelConn;
+ int n;
+
+ /*
+ * We leave most of the connection establishment to PQconnectPoll, since
+ * it's very similar to normal connection establishment. But once we get
+ * to the CONNECTION_AWAITING_RESPONSE state we need to start doing our
+ * own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we
+ * unexpectedly do receive some data, we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage((const PGconn *) cancelConn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn((PGconn *) cancelConn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish((PGconn *) cancelConn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..b4e7394314f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address, and these fields have already been set up in PQcancelConn, so
+ * leave these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
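For reference, the cancel-request packet assembled above follows the existing frontend/backend protocol: a length word (16, counting itself), the special request code 80877102 (CANCEL_REQUEST_CODE), the backend PID, and the secret key, all in network byte order. The following standalone sketch uses hypothetical struct and helper names; in the patch itself, pqPacketSend prepends the length word rather than the caller.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Hypothetical mirror of the 16-byte cancel-request wire format. */
typedef struct
{
	uint32_t	packetlen;		/* total packet length, including itself */
	uint32_t	requestCode;	/* CANCEL_REQUEST_CODE */
	uint32_t	backendPID;		/* PID of the backend to cancel */
	uint32_t	cancelAuthCode; /* secret key from the original connection */
} WireCancelPacket;

static void
build_cancel_packet(uint8_t out[16], uint32_t pid, uint32_t key)
{
	WireCancelPacket p;

	p.packetlen = htonl(16);
	p.requestCode = htonl(80877102);	/* (1234 << 16) | 5678 */
	p.backendPID = htonl(pid);
	p.cancelAuthCode = htonl(key);
	memcpy(out, &p, sizeof(p));
}
```

Because the packet carries the secret key in cleartext, sending it over an unencrypted socket is exactly the weakness the new encrypted cancel connections avoid.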
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4650,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 1e5e7481a7c..3c966b95133 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -79,7 +79,9 @@ typedef enum
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Internal state: Checking target server
* properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -166,6 +168,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -322,16 +329,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelConn(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d2..1982cd4ded2 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -621,6 +625,11 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
/* PGcancel stores all data necessary to cancel a connection. A copy of this
* data is required to safely cancel a connection running on a different
* thread.
@@ -687,6 +696,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..97f21fe9271 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,264 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelConn and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelConn(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +2047,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2149,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 95ae7845d86..0c0114d26de 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1762,6 +1762,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
--
2.34.1
Docs: one bogus "that that".
Did we consider having PQcancelConn() instead be called
PQcancelCreate()? I think this better conveys that what we're doing is
create an object that can be used to do something, and that nothing else
is done with it by default. Also, the comment still says
"Asynchronously cancel a query on the given connection. This requires
polling the returned PGcancelConn to actually complete the cancellation
of the query." but this is no longer a good description of what this
function does.
Why do we return a non-NULL pointer from PQcancelConn in the first three
cases where we return errors? (original conn was NULL, original conn is
PGINVALID_SOCKET, pqCopyPGconn returns failure) Wouldn't it make more
sense to free the allocated object and return NULL? Actually, I wonder
if there's any reason at all to return a valid pointer in any failure
cases; I mean, do we really expect that application authors are going to
read/report the error message from a PGcancelConn that failed to be fully
created? Anyway, maybe there are reasons for this; but in any case we
should set ->cancelRequest in all cases, not only after the first tests
for errors.
I think the extra PGconn inside pg_cancel_conn is useless; it would be
simpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the
indirection through the extra struct. You're actually dereferencing the
object in two ways in the new code, both by casting the outer object
straight to PGconn (taking advantage that the struct member is first in
the struct), and by using PGcancelConn->conn. This seems pointless. I
mean, if we're going to cast to "PGconn *" in some places anyway, then
we may as well access all members directly. Perhaps, if you want, you
could add asserts that ->cancelRequest is set true in all the
fe-cancel.c functions. Anyway, we'd still have compiler support to tell
you that you're passing the wrong struct to the function. (I didn't
actually try to change the code this way, so I might be wrong.)
We could move the definition of struct pg_cancel to fe-cancel.c. Nobody
outside that needs to know that definition anyway.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"XML!" Exclaimed C++. "What are you doing here? You're not a programming
language."
"Tell that to the people who use me," said XML.
https://burningbird.net/the-parable-of-the-languages/
On Wed, 6 Mar 2024 at 19:22, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Docs: one bogus "that that".
will fix
Did we consider having PQcancelConn() instead be called
PQcancelCreate()?
Fine by me
Also, the comment still says
"Asynchronously cancel a query on the given connection. This requires
polling the returned PGcancelConn to actually complete the cancellation
of the query." but this is no longer a good description of what this
function does.
will fix
Why do we return a non-NULL pointer from PQcancelConn in the first three
cases where we return errors? (original conn was NULL, original conn is
PGINVALID_SOCKET, pqCopyPGconn returns failure) Wouldn't it make more
sense to free the allocated object and return NULL? Actually, I wonder
if there's any reason at all to return a valid pointer in any failure
cases; I mean, do we really expect that application authors are going to
read/report the error message from a PGcancelConn that failed to be fully
created?
I think having a useful error message when possible is quite nice. And
I do think people will read/report this error message. Especially
since many people will simply pass it to PQcancelBlocking, whether
it's NULL or not. And then check the status, and then report the error
if the status was CONNECTION_BAD.
but in any case we
should set ->cancelRequest in all cases, not only after the first tests
for errors.
makes sense
I think the extra PGconn inside pg_cancel_conn is useless; it would be
simpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the
indirection through the extra struct.
That sounds nice indeed. I'll try it out.
We could move the definition of struct pg_cancel to fe-cancel.c. Nobody
outside that needs to know that definition anyway.
will do
Attached is a new patchset with various changes. I created a dedicated
0002 patch to add tests for the already existing cancellation
functions, because that seemed useful for another thread where changes
to the cancellation protocol are being proposed[1].
[1]: /messages/by-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi
On Wed, 6 Mar 2024 at 19:22, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Docs: one bogus "that that".
This was already fixed by my previous doc changes in v32, I guess that
email got crossed with this one
Did we consider having PQcancelConn() instead be called
PQcancelCreate()?
Done
"Asynchronously cancel a query on the given connection. This requires
polling the returned PGcancelConn to actually complete the cancellation
of the query." but this is no longer a good description of what this
function does.
Fixed
Anyway, maybe there are reasons for this; but in any case we
should set ->cancelRequest in all cases, not only after the first tests
for errors.
Done
I think the extra PGconn inside pg_cancel_conn is useless; it would be
simpler to typedef PGcancelConn to PGconn in fe-cancel.c, and remove the
indirection through the extra struct. You're actually dereferencing the
object in two ways in the new code, both by casting the outer object
straight to PGconn (taking advantage that the struct member is first in
the struct), and by using PGcancelConn->conn. This seems pointless. I
mean, if we're going to cast to "PGconn *" in some places anyway, then
we may as well access all members directly. Perhaps, if you want, you
could add asserts that ->cancelRequest is set true in all the
fe-cancel.c functions. Anyway, we'd still have compiler support to tell
you that you're passing the wrong struct to the function. (I didn't
actually try to change the code this way, so I might be wrong.)
Turns out you were wrong about the compiler support telling us we're
passing the wrong struct: when both the PGconn and PGcancelConn
typedefs refer to the same struct, the compiler allows passing a PGconn
to PGcancelConn functions and vice versa without complaining. This
seems enough reason for me to keep the indirection through the extra
struct.
So instead of adding the proposed typedef, I chose to add a comment to
pg_cancel_conn explaining its purpose, and to stop casting PGcancelConn
to PGconn, instead always accessing the conn field for consistency.
We could move the definition of struct pg_cancel to fe-cancel.c. Nobody
outside that needs to know that definition anyway.
Done in 0003
Attachments:
v33-0003-libpq-Move-pg_cancel-to-fe-cancel.c.patch
From 54cb4e8a42b0f8342b765a1e3e222f8d24b432a8 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 7 Mar 2024 10:11:32 +0100
Subject: [PATCH v33 3/5] libpq: Move pg_cancel to fe-cancel.c
No other files need to access this struct, so there is no need to have
its definition in a header file.
---
src/interfaces/libpq/fe-cancel.c | 19 +++++++++++++++++++
src/interfaces/libpq/libpq-int.h | 18 ------------------
2 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 51f8d8a78c4..29e66608be6 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,6 +21,25 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
+/* PGcancel stores all data necessary to cancel a connection. A copy of this
+ * data is required to safely cancel a connection running on a different
+ * thread.
+ */
+struct pg_cancel
+{
+ SockAddr raddr; /* Remote address */
+ int be_pid; /* PID of backend --- needed for cancels */
+ int be_key; /* key of backend --- needed for cancels */
+ int pgtcp_user_timeout; /* tcp user timeout */
+ int keepalives; /* use TCP keepalives? */
+ int keepalives_idle; /* time between TCP keepalives */
+ int keepalives_interval; /* time between TCP keepalive
+ * retransmits */
+ int keepalives_count; /* maximum number of TCP keepalive
+ * retransmits */
+};
+
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d2..3abcd180d6d 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -621,24 +621,6 @@ struct pg_conn
PQExpBufferData workBuffer; /* expansible string */
};
-/* PGcancel stores all data necessary to cancel a connection. A copy of this
- * data is required to safely cancel a connection running on a different
- * thread.
- */
-struct pg_cancel
-{
- SockAddr raddr; /* Remote address */
- int be_pid; /* PID of backend --- needed for cancels */
- int be_key; /* key of backend --- needed for cancels */
- int pgtcp_user_timeout; /* tcp user timeout */
- int keepalives; /* use TCP keepalives? */
- int keepalives_idle; /* time between TCP keepalives */
- int keepalives_interval; /* time between TCP keepalive
- * retransmits */
- int keepalives_count; /* maximum number of TCP keepalive
- * retransmits */
-};
-
/* String descriptions of the ExecStatusTypes.
* direct use of this array is deprecated; call PQresStatus() instead.
--
2.34.1
v33-0002-Add-tests-for-libpq-query-cancellation-APIs.patch
From 8a3845e6754e0d323d498e596bc0d82e40de0cdb Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 7 Mar 2024 10:18:17 +0100
Subject: [PATCH v33 2/5] Add tests for libpq query cancellation APIs
This is in preparation for making changes and additions to these APIs.
---
.../modules/libpq_pipeline/libpq_pipeline.c | 138 +++++++++++++++++-
1 file changed, 137 insertions(+), 1 deletion(-)
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 5f43aa40de4..3517a852736 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -86,6 +86,139 @@ pg_fatal_impl(int line, const char *fmt,...)
exit(1);
}
+/*
+ * Check that the query on the given connection got canceled.
+ *
+ * This is a function wrapped in a macro to make the reported line number
+ * in an error match the line number of the invocation.
+ */
+#define confirm_query_canceled(conn) confirm_query_canceled_impl(__LINE__, conn)
+static void
+confirm_query_canceled_impl(int line, PGconn *conn)
+{
+ PGresult *res = NULL;
+
+ res = PQgetResult(conn);
+ if (res == NULL)
+ pg_fatal_impl(line, "PQgetResult returned null: %s",
+ PQerrorMessage(conn));
+ if (PQresultStatus(res) != PGRES_FATAL_ERROR)
+ pg_fatal_impl(line, "query did not fail when it was expected");
+ if (strcmp(PQresultErrorField(res, PG_DIAG_SQLSTATE), "57014") != 0)
+ pg_fatal_impl(line, "query failed with a different error than cancellation: %s",
+ PQerrorMessage(conn));
+ PQclear(res);
+ while (PQisBusy(conn))
+ {
+ PQconsumeInput(conn);
+ }
+}
+
+#define send_cancellable_query(conn, monitorConn) send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes, &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait until the query is actually running. Otherwise sending a
+ * cancellation request might not cancel the query due to race conditions.
+ */
+ while (true)
+ {
+ char *value = NULL;
+ PGresult *res = PQexec(
+ monitorConn,
+ "SELECT count(*) FROM pg_stat_activity WHERE "
+ "query = 'SELECT pg_sleep($1)' "
+ "AND state = 'active'");
+
+ if (PQresultStatus(res) != PGRES_TUPLES_OK)
+ {
+ pg_fatal("Connection to database failed: %s", PQerrorMessage(monitorConn));
+ }
+ if (PQntuples(res) != 1)
+ {
+ pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ }
+ if (PQnfields(res) != 1)
+ {
+ pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ }
+ value = PQgetvalue(res, 0, 0);
+ if (*value != '0')
+ {
+ PQclear(res);
+ break;
+ }
+ PQclear(res);
+
+ /*
+ * wait 10ms before polling again
+ */
+ pg_usleep(10000);
+ }
+}
+
+static void
+test_cancel(PGconn *conn, const char *conninfo)
+{
+ PGcancel *cancel = NULL;
+ PGconn *monitorConn = NULL;
+ char errorbuf[256];
+
+ fprintf(stderr, "test cancellations... ");
+
+ if (PQsetnonblocking(conn, 1) != 0)
+ pg_fatal("failed to set nonblocking mode: %s", PQerrorMessage(conn));
+
+ /*
+ * Make a connection to the database to monitor the query on the main
+ * connection.
+ */
+ monitorConn = PQconnectdb(conninfo);
+ if (PQstatus(monitorConn) != CONNECTION_OK)
+ {
+ pg_fatal("Connection to database failed: %s",
+ PQerrorMessage(monitorConn));
+ }
+
+ /* test PQcancel */
+ send_cancellable_query(conn, monitorConn);
+ cancel = PQgetCancel(conn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ /* PGcancel object can be reused for the next query */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancel(cancel, errorbuf, sizeof(errorbuf)))
+ {
+ pg_fatal("failed to run PQcancel: %s", errorbuf);
+ }
+ confirm_query_canceled(conn);
+
+ PQfreeCancel(cancel);
+
+ /* test PQrequestCancel */
+ send_cancellable_query(conn, monitorConn);
+ if (!PQrequestCancel(conn))
+ pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
+ confirm_query_canceled(conn);
+
+ fprintf(stderr, "ok\n");
+}
+
static void
test_disallowed_in_pipeline(PGconn *conn)
{
@@ -1789,6 +1922,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
+ printf("cancel\n");
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
@@ -1890,7 +2024,9 @@ main(int argc, char **argv)
PQTRACE_SUPPRESS_TIMESTAMPS | PQTRACE_REGRESS_MODE);
}
- if (strcmp(testname, "disallowed_in_pipeline") == 0)
+ if (strcmp(testname, "cancel") == 0)
+ test_cancel(conn, conninfo);
+ else if (strcmp(testname, "disallowed_in_pipeline") == 0)
test_disallowed_in_pipeline(conn);
else if (strcmp(testname, "multi_pipelines") == 0)
test_multi_pipelines(conn);
--
2.34.1
v33-0001-Add-missing-connection-statuses-to-docs.patch
From 05f80e7711b69b6735c22cf2cfe63271bf9e4954 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Wed, 6 Mar 2024 18:33:49 +0100
Subject: [PATCH v33 1/5] Add missing connection statuses to docs
The list of connection statuses that PQstatus might return during an
asynchronous connection attempt was incorrect:
1. CONNECTION_SETENV is never returned anymore and is only part of the
enum for backwards compatibility. So it's removed from the list.
2. CONNECTION_CHECK_STANDBY and CONNECTION_GSS_STARTUP were not listed.
This addresses those problems. CONNECTION_NEEDED and
CONNECTION_CHECK_TARGET are not listed in the docs on purpose, since
these states are internal states that can never be observed by a caller
of PQstatus.
---
doc/src/sgml/libpq.sgml | 15 ++++++++++++---
src/interfaces/libpq/libpq-fe.h | 3 ++-
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 1d8998efb2a..a2bbf33d029 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -428,11 +428,11 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn);
</listitem>
</varlistentry>
- <varlistentry id="libpq-connection-setenv">
- <term><symbol>CONNECTION_SETENV</symbol></term>
+ <varlistentry id="libpq-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
<listitem>
<para>
- Negotiating environment-driven parameter settings.
+ Negotiating GSS encryption.
</para>
</listitem>
</varlistentry>
@@ -446,6 +446,15 @@ PostgresPollingStatusType PQconnectPoll(PGconn *conn);
</listitem>
</varlistentry>
+ <varlistentry id="libpq-connection-check-standby">
+ <term><symbol>CONNECTION_CHECK_STANDBY</symbol></term>
+ <listitem>
+ <para>
+ Checking if connection is to a server in standby mode.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="libpq-connection-consume">
<term><symbol>CONNECTION_CONSUME</symbol></term>
<listitem>
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3f..1e5e7481a7c 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -77,7 +77,8 @@ typedef enum
CONNECTION_CHECK_WRITABLE, /* Checking if session is read-write. */
CONNECTION_CONSUME, /* Consuming any extra messages. */
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
- CONNECTION_CHECK_TARGET, /* Checking target server properties. */
+ CONNECTION_CHECK_TARGET, /* Internal state: Checking target server
+ * properties. */
CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
} ConnStatusType;
base-commit: e444ebcb85c0b55b1ccf7bcb785ad2708090a2a2
--
2.34.1
v33-0005-Start-using-new-libpq-cancel-APIs.patch
From a92799883555474448ae5d1ab698804c40bf1391 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v33 5/5] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..98dcca3e6fd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..dcc13dc3b24 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c355e8f3f7d..8892abc3502 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2698,6 +2698,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 812e7646e16..8aa528002f7 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -717,6 +717,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..5ed9f3ba17b 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index 0a66235153a..65a9abd6888 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v33-0004-libpq-Add-encrypted-and-non-blocking-versions-of.patch
From 4a660f5fd3fa3daf51639ea4d595b58c73548066 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Fri, 26 Jan 2024 17:01:00 +0100
Subject: [PATCH v33 4/5] libpq: Add encrypted and non-blocking versions of
PQcancel
The existing PQcancel API uses blocking IO. This makes PQcancel
impossible to use in an event-loop-based codebase without blocking the
event loop until the call returns. It also does not encrypt the
connection over which the cancel request is sent, even when the
original connection required encryption.
This patch adds a set of new functions which, together, allow users to
send cancel requests in an encrypted and non-blocking way. The primary new
functions are PQcancelBlocking and PQcancelStart (for blocking and
non-blocking requests respectively). These functions reuse the normal
connection establishment code, so that they can apply the same connection
options, such as sslmode and gssencmode, that the original connection used.
---
doc/src/sgml/libpq.sgml | 461 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 301 +++++++++++-
src/interfaces/libpq/fe-connect.c | 130 ++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 5 +
.../modules/libpq_pipeline/libpq_pipeline.c | 125 +++++
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1015 insertions(+), 48 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a2bbf33d029..373d0dc3223 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5287,7 +5287,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,10 +6034,402 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelCreate">
+ <term><function>PQcancelCreate</function><indexterm><primary>PQcancelCreate</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelCreate(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelCreate"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+      Many connection parameters of the original connection will be reused when
+ setting up the connection for the cancel request. Importantly, if the
+ original connection requires encryption of the connection and/or
+ verification of the target host (using <literal>sslmode</literal> or
+ <literal>gssencmode</literal>), then the connection for the cancel
+ request is made with these same requirements. Any connection options
+ that are only used during authentication or after authentication of the
+ client are ignored though, because cancellation requests do not require
+ authentication and the connection is closed right after the cancellation
+ request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelCreate</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+      Requests that the server abandon processing of the current command in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+      Requests that the server abandon processing of the current command in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the cancel connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+      has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the cancel
+      request was successfully dispatched.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+      cancel procedure has failed; if the call returns <symbol>CONNECTION_OK</symbol>, then the cancel request was successfully dispatched. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-allocated">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelCreate"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-started">
+ <term><symbol>CONNECTION_STARTED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for connection to be made.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-made">
+ <term><symbol>CONNECTION_MADE</symbol></term>
+ <listitem>
+ <para>
+ Connection OK; waiting to send.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-awaiting-response">
+ <term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a response from the server.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-ssl-startup">
+ <term><symbol>CONNECTION_SSL_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating SSL encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating GSS encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ Note that, although these constants will remain (in order to maintain
+ compatibility), an application should never rely upon these occurring in a
+ particular order, or at all, or on the status always being one of these
+ documented values. An application might do something like this:
+<programlisting>
+switch (PQcancelStatus(cancelConn))
+{
+ case CONNECTION_STARTED:
+ feedback = "Connecting...";
+ break;
+
+ case CONNECTION_MADE:
+ feedback = "Connected to server...";
+ break;
+.
+.
+.
+ default:
+ feedback = "Connecting...";
+}
+</programlisting>
+ </para>
+
+ <para>
+ The <literal>connect_timeout</literal> connection parameter is ignored
+ when using <function>PQcancelPoll</function>; it is the application's
+ responsibility to decide whether an excessive amount of time has elapsed.
+ Otherwise, <function>PQcancelStart</function> followed by a
+ <function>PQcancelPoll</function> loop is equivalent to
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The status can be one of a number of values. However, only three of
+ these are seen outside of an asynchronous cancel procedure:
+ <literal>CONNECTION_ALLOCATED</literal>,
+ <literal>CONNECTION_OK</literal> and
+ <literal>CONNECTION_BAD</literal>. The initial state of a
+      <structname>PGcancelConn</structname> that's successfully created using
+ <xref linkend="libpq-PQcancelCreate"/> is <literal>CONNECTION_ALLOCATED</literal>.
+ A cancel request that was successfully dispatched
+ has the status <literal>CONNECTION_OK</literal>. A failed
+ cancel attempt is signaled by status
+ <literal>CONNECTION_BAD</literal>. An OK status will
+ remain so until <xref linkend="libpq-PQcancelFinish"/> or
+ <xref linkend="libpq-PQcancelReset"/> is called.
+ </para>
+
+ <para>
+ See the entry for <xref linkend="libpq-PQcancelStart"/> and <xref
+ linkend="libpq-PQcancelPoll"/> with regards to other status codes that
+ might be returned.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+      currently open. This might change as a result of calling any of the
+      functions in this section on the <structname>PGcancelConn</structname> (except for
+ <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ <indexterm><primary>error message</primary></indexterm> Returns the error message
+ most recently generated by an operation on the cancel connection.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Nearly all <application>libpq</application> functions that take a
+ <structname>PGcancelConn</structname> will set a message for
+ <xref linkend="libpq-PQcancelErrorMessage"/> if they fail. Note that by
+ <application>libpq</application> convention, a nonempty
+ <xref linkend="libpq-PQcancelErrorMessage"/> result can consist of multiple lines,
+ and will include a trailing newline. The caller should not free
+ the result directly. It will be freed when the associated
+ <structname>PGcancelConn</structname> handle is passed to
+ <xref linkend="libpq-PQcancelFinish"/>. The result string should not be
+ expected to remain the same across operations on the
+ <literal>PGcancelConn</literal> structure.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+      If the <symbol>PGcancelConn</symbol> is currently being used to send a
+      cancel request, then that connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+    These functions represent older methods of sending cancel requests.
+    Although they still work, they are deprecated because they do not send
+    the cancel requests in an encrypted manner, even when the original
+    connection specified <literal>sslmode</literal> or
+    <literal>gssencmode</literal> to require encryption. Their use in new
+    code is therefore heavily discouraged, and it is recommended to convert
+    existing code to use the new functions instead.
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6046,7 +6438,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6088,36 +6480,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+      <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+ used safely from within a signal handler. <synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+      <xref linkend="libpq-PQcancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. The only benefit that <xref linkend="libpq-PQcancel"/> has
+ is that it can be safely invoked from a signal handler, if the
+ <parameter>errbuf</parameter> is a local variable in the signal handler.
+ However, this is generally not considered a big enough benefit to be
+ worth the security issues that this function has.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6129,13 +6522,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+      <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6150,7 +6551,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9362,7 +9763,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..9fbd3d34074 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelCreate 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 29e66608be6..066129a9272 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -21,9 +21,21 @@
#include "libpq-int.h"
#include "port/pg_bswap.h"
-/* PGcancel stores all data necessary to cancel a connection. A copy of this
- * data is required to safely cancel a connection running on a different
- * thread.
+/*
+ * pg_cancel_conn is a wrapper around a PGconn to send cancellations using
+ * PQcancelBlocking and PQcancelStart. This isn't just a typedef because we
+ * want the compiler to complain when we pass a PGconn to a function that
+ * expects a PGcancelConn (or vice versa).
+ */
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
+/*
+ * pg_cancel stores all data necessary to cancel a connection using the
+ * deprecated PQcancel function. A copy of this data is required to safely
+ * cancel a connection running on a different thread.
*/
struct pg_cancel
{
@@ -40,6 +52,287 @@ struct pg_cancel
};
+/*
+ * PQcancelCreate
+ *
+ * Create a PGcancelConn that can later be used to cancel a query on the
+ * given connection. Actually sending the cancel request requires a further
+ * call to PQcancelBlocking, or to PQcancelStart plus PQcancelPoll.
+ */
+PGcancelConn *
+PQcancelCreate(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+	 * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+	if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+	cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = &cancelConn->conn;
+ int n;
+
+ /*
+	 * We leave most of the connection establishment to PQconnectPoll, since
+	 * it's very similar to normal connection establishment. But once we get
+	 * to the CONNECTION_AWAITING_RESPONSE state we need to start doing our
+	 * own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+	 * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus(&cancelConn->conn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket(&cancelConn->conn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage(&cancelConn->conn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn(&cancelConn->conn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish(&cancelConn->conn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -144,7 +437,7 @@ optional_setsockopt(int fd, int protoid, int optid, int value)
/*
- * PQcancel: request query cancel
+ * PQcancel: old, non-encrypted, but signal-safe way of requesting query cancel
*
* The return value is true if the cancel request was successfully
* dispatched, false if not (in which case an error message is available).
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..6f1a4b2a430 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address, and these fields have already been set up in PQcancelCreate, so
+ * leave these fields alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,15 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ {
+ return;
+ }
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4650,13 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Since cancel requests never change their addrinfo we don't free it
+ * here. Otherwise we would have to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 1e5e7481a7c..e888fc4789a 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -79,7 +79,9 @@ typedef enum
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Internal state: Checking target server
* properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -166,6 +168,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -322,16 +329,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelCreate(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3abcd180d6d..9c05f11a6e9 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -669,6 +673,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 3517a852736..e6499be6613 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -172,6 +172,7 @@ static void
test_cancel(PGconn *conn, const char *conninfo)
{
PGcancel *cancel = NULL;
+ PGcancelConn *cancelConn = NULL;
PGconn *monitorConn = NULL;
char errorbuf[256];
@@ -216,6 +217,130 @@ test_cancel(PGconn *conn, const char *conninfo)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_canceled(conn);
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelCreate and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * after
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
fprintf(stderr, "ok\n");
}
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index cc3611e6068..12d7821135f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1762,6 +1762,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
--
2.34.1
Here's a last one for the cfbot.
I have a question about this one
int
PQcancelStart(PGcancelConn *cancelConn)
{
[...]
if (cancelConn->conn.status != CONNECTION_ALLOCATED)
{
libpq_append_conn_error(&cancelConn->conn,
"cancel request is already being sent on this connection");
cancelConn->conn.status = CONNECTION_BAD;
return 0;
}
If we do this and we see conn.status is not ALLOCATED, meaning a cancel
is already ongoing, shouldn't we leave conn.status alone instead of
changing to CONNECTION_BAD? I mean, we shouldn't be jogging the elbow
of whoever's doing that, should we? Maybe just add the error message
and return 0?
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"If it is not right, do not do it.
If it is not true, do not say it." (Marcus Aurelius, Meditations)
Attachments:
v34-0001-libpq-Add-encrypted-and-non-blocking-query-cance.patch (text/x-diff; charset=utf-8)
From fc0cbf0a6184d374e12e051f88f9f8eef7cc30d9 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Tue, 12 Mar 2024 10:09:25 +0100
Subject: [PATCH v34 1/2] libpq: Add encrypted and non-blocking query
cancellation routines
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PGcancelConn struct and assorted functions, which
provide a better mechanism for sending cancel requests; in particular, any
encryption used in the original connection is also used in the
cancel connection. The main entry points are:
- PQcancelCreate creates the PGcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate a non-blocking cancel request,
using encryption if the original connection did so; the request must
then be pumped using
- PQcancelPoll.
- PQcancelReset puts a PGcancelConn back in its initial state so that it
can be reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
---
doc/src/sgml/libpq.sgml | 465 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 297 ++++++++++-
src/interfaces/libpq/fe-connect.c | 129 ++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 5 +
.../modules/libpq_pipeline/libpq_pipeline.c | 125 +++++
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1015 insertions(+), 47 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a2bbf33d02..373d0dc322 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5287,7 +5287,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,10 +6034,402 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelCreate">
+ <term><function>PQcancelCreate</function><indexterm><primary>PQcancelCreate</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelCreate(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelCreate"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ Many connection parameters of the original client will be reused when
+ setting up the connection for the cancel request. Importantly, if the
+ original connection requires encryption of the connection and/or
+ verification of the target host (using <literal>sslmode</literal> or
+ <literal>gssencmode</literal>), then the connection for the cancel
+ request is made with these same requirements. Any connection options
+ that are only used during authentication or after authentication of the
+ client are ignored though, because cancellation requests do not require
+ authentication and the connection is closed right after the cancellation
+ request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelCreate</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the cancel connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the cancel
+ request was successfully dispatched.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+ cancel procedure has failed; if the call returns <symbol>CONNECTION_OK</symbol>, then the cancel request was successfully dispatched. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-allocated">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelCreate"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-started">
+ <term><symbol>CONNECTION_STARTED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for connection to be made.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-made">
+ <term><symbol>CONNECTION_MADE</symbol></term>
+ <listitem>
+ <para>
+ Connection OK; waiting to send.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-awaiting-response">
+ <term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a response from the server.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-ssl-startup">
+ <term><symbol>CONNECTION_SSL_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating SSL encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating GSS encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ Note that, although these constants will remain (in order to maintain
+ compatibility), an application should never rely upon these occurring in a
+ particular order, or at all, or on the status always being one of these
+ documented values. An application might do something like this:
+<programlisting>
+switch(PQcancelStatus(cancelConn))
+{
+ case CONNECTION_STARTED:
+ feedback = "Connecting...";
+ break;
+
+ case CONNECTION_MADE:
+ feedback = "Connected to server...";
+ break;
+.
+.
+.
+ default:
+ feedback = "Connecting...";
+}
+</programlisting>
+ </para>
+
+ <para>
+ The <literal>connect_timeout</literal> connection parameter is ignored
+ when using <function>PQcancelPoll</function>; it is the application's
+ responsibility to decide whether an excessive amount of time has elapsed.
+ Otherwise, <function>PQcancelStart</function> followed by a
+ <function>PQcancelPoll</function> loop is equivalent to
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The status can be one of a number of values. However, only three of
+ these are seen outside of an asynchronous cancel procedure:
+ <literal>CONNECTION_ALLOCATED</literal>,
+ <literal>CONNECTION_OK</literal> and
+ <literal>CONNECTION_BAD</literal>. The initial state of a
+ <structname>PGcancelConn</structname> that's successfully created using
+ <xref linkend="libpq-PQcancelCreate"/> is <literal>CONNECTION_ALLOCATED</literal>.
+ A cancel request that was successfully dispatched
+ has the status <literal>CONNECTION_OK</literal>. A failed
+ cancel attempt is signaled by status
+ <literal>CONNECTION_BAD</literal>. An OK status will
+ remain so until <xref linkend="libpq-PQcancelFinish"/> or
+ <xref linkend="libpq-PQcancelReset"/> is called.
+ </para>
+
+ <para>
+ See the entry for <xref linkend="libpq-PQcancelStart"/> and <xref
+ linkend="libpq-PQcancelPoll"/> with regards to other status codes that
+ might be returned.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+ currently open. This might change as a result of calling any of the
+ functions described in this section on the <structname>PGcancelConn</structname> (except for
+ <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ <indexterm><primary>error message</primary></indexterm> Returns the error message
+ most recently generated by an operation on the cancel connection.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelconn);
+</synopsis>
+ </para>
+
+ <para>
+ Nearly all <application>libpq</application> functions that take a
+ <structname>PGcancelConn</structname> will set a message for
+ <xref linkend="libpq-PQcancelErrorMessage"/> if they fail. Note that by
+ <application>libpq</application> convention, a nonempty
+ <xref linkend="libpq-PQcancelErrorMessage"/> result can consist of multiple lines,
+ and will include a trailing newline. The caller should not free
+ the result directly. It will be freed when the associated
+ <structname>PGcancelConn</structname> handle is passed to
+ <xref linkend="libpq-PQcancelFinish"/>. The result string should not be
+ expected to remain the same across operations on the
+ <literal>PGcancelConn</literal> structure.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it did not finish sending the cancel
+ request yet). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a cancel
+ request, then this connection is closed. It will then prepare the
+ <symbol>PGcancelConn</symbol> object such that it can be used to send a
+ new cancel request. This can be used to create one <symbol>PGcancelConn</symbol>
+ for a <symbol>PGconn</symbol> and reuse that multiple times throughout
+ the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+ These functions represent older methods of sending cancel requests.
+ Although they still work, they are deprecated because they do not send
+ the cancel request in an encrypted manner, even when the original
+ connection specified <literal>sslmode</literal> or
+ <literal>gssencmode</literal> to require encryption. Their use is
+ therefore strongly discouraged in new code, and it is recommended to
+ convert existing code to the new functions instead.
+ </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6046,7 +6438,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6088,37 +6480,38 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+ <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+ used safely from within a signal handler.
+<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
+ <xref linkend="libpq-PQcancel"/> exists only for backwards
+ compatibility; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. The only advantage of <xref linkend="libpq-PQcancel"/>
+ is that it can be safely invoked from a signal handler, provided that
+ <parameter>errbuf</parameter> is a local variable in the signal handler.
+ However, this advantage is generally not considered significant enough
+ to outweigh the function's security issues.
+ </para>
+
+ <para>
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
+ </para>
+
+ <para>
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
with an explanatory error message. <parameter>errbuf</parameter>
must be a char array of size <parameter>errbufsize</parameter> (the
recommended size is 256 bytes).
</para>
-
- <para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
- </para>
-
- <para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
- </para>
</listitem>
</varlistentry>
</variablelist>
@@ -6129,13 +6522,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> exists only for backwards
+ compatibility; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6150,7 +6551,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9362,7 +9763,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..9fbd3d3407 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelCreate 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index d69b8f9f9f..5cc674820a 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -22,6 +22,18 @@
#include "port/pg_bswap.h"
+/*
+ * pg_cancel_conn is a wrapper around a PGconn to send cancellations using
+ * PQcancelBlocking and PQcancelStart. This isn't just a typedef because we
+ * want the compiler to complain when a PGconn is passed to a function that
+ * expects a PGcancelConn, and vice versa.
+ */
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
+
/*
* pg_cancel (backing struct for PGcancel) stores all data necessary to send a
* cancel request.
@@ -41,6 +53,289 @@ struct pg_cancel
};
+/*
+ * PQcancelCreate
+ *
+ * Create and return a PGcancelConn, which can be used to securely cancel a
+ * query on the given connection.
+ *
+ * This requires either following the non-blocking flow through
+ * PQcancelStart() and PQcancelPoll(), or the blocking PQcancelBlocking().
+ */
+PGcancelConn *
+PQcancelCreate(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = &cancelConn->conn;
+ int n;
+
+ /*
+ * We leave most of the connection establishment to PQconnectPoll, since
+ * a cancel connection is very similar to a normal connection. But once
+ * we reach the CONNECTION_AWAITING_RESPONSE state, we need to handle
+ * the rest ourselves.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus(&cancelConn->conn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket(&cancelConn->conn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage(&cancelConn->conn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn(&cancelConn->conn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish(&cancelConn->conn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -145,7 +440,7 @@ optional_setsockopt(int fd, int protoid, int optid, int value)
/*
- * PQcancel: request query cancel
+ * PQcancel: old, non-encrypted, but signal-safe way of requesting query cancel
*
* The return value is true if the cancel request was successfully
* dispatched, false if not (in which case an error message is available).
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f..8e8634e5ba 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special, though: they should try only one host and
+ * address, and those fields have already been set up in PQcancelCreate,
+ * so leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,13 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ return;
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4648,14 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Normally we release the addrinfo here, but for cancel requests we
+ * keep it, because their addrinfo never changes and releasing it would
+ * force us to rebuild it during a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 2c06044a75..09b485bd2b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -79,7 +79,9 @@ typedef enum
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Internal state: checking target server
* properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -166,6 +168,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -322,16 +329,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelCreate(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3abcd180d6..9c05f11a6e 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -669,6 +673,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index c6c7b1c3a1..a17c97bdaf 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -215,6 +215,7 @@ static void
test_cancel(PGconn *conn)
{
PGcancel *cancel;
+ PGcancelConn *cancelConn;
PGconn *monitorConn;
char errorbuf[256];
@@ -251,6 +252,130 @@ test_cancel(PGconn *conn)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_canceled(conn);
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelCreate and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test PQcancelReset works on the cancel connection and it can be reused
+ * afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
fprintf(stderr, "ok\n");
}
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d3a7f75b08..504f55a7a1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1762,6 +1762,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
--
2.39.2
v34-0002-Start-using-new-libpq-cancel-APIs.patch (text/x-diff)
From 2f23b515aa31a0b524db48b68d254bf96dbfe265 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v34 2/2] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d..98dcca3e6f 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf591..dcc13dc3b2 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56..e03160bd97 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6d..2626e68cc6 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461f..5ed9f3ba17 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index ed110f740f..0b342b5c2b 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.39.2
On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
If we do this and we see conn.status is not ALLOCATED, meaning a cancel
is already ongoing, shouldn't we leave conn.status alone instead of
changing to CONNECTION_BAD? I mean, we shouldn't be juggling the elbow
of whoever's doing that, should we? Maybe just add the error message
and return 0?
I'd rather fail as hard as possible when someone is using the API
wrongly. Not doing so is bound to cause confusion imho. e.g. if the
state is still CONNECTION_OK because the user forgot to call
PQcancelReset then keeping the connection status "as is" might seem as
if the cancel request succeeded even though nothing happened. So if
the user uses the API incorrectly then I'd rather use all the avenues
possible to indicate that there was an error. Especially since in all
other cases if PQcancelStart returns false CONNECTION_BAD is the
status, and this in turn means that PQconnectPoll will return
PGRES_POLLING_FAILED. So I doubt people will always check the actual
return value of the function to check if an error happened. They might
check PQcancelStatus or PQconnectPoll instead, because that integrates
easier with the rest of their code.
(Note: for a PGcancelConn the polling function is PQcancelPoll, which shares PQconnectPoll's return values, including PGRES_POLLING_FAILED.)
On Tue, 12 Mar 2024 at 10:53, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I'd rather fail as hard as possible when someone is using the API
wrongly.
To be clear, this is my way of looking at it. If you feel strongly
that we should not change conn.status, I'm fine with making that
change to the patchset.
On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Here's a last one for the cfbot.
Thanks for committing the first 3 patches btw. Attached a tiny change
to 0001, which adds "(backing struct for PGcancelConn)" to the comment
on pg_cancel_conn.
Attachments:
v35-0002-Start-using-new-libpq-cancel-APIs.patch (application/x-patch)
From d340fde6883a249fd7c1a90033675a3b5edb603e Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v35 2/2] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..98dcca3e6fd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..dcc13dc3b24 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56f..e03160bd975 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6da..2626e68cc69 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..5ed9f3ba17b 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index ed110f740f1..0b342b5c2bb 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v35-0001-libpq-Add-encrypted-and-non-blocking-query-cance.patch (application/x-patch)
From 1c2becaff422b66bc9c263fcdf5c318736f147f6 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Tue, 12 Mar 2024 10:09:25 +0100
Subject: [PATCH v35 1/2] libpq: Add encrypted and non-blocking query
cancellation routines
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism for sending cancel requests; in particular,
any encryption used by the original connection is also used by the
cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate non-blocking cancel requests,
using encryption if the original connection did so, which must be
pumped using
- PQcancelPoll.
- PQcancelReset puts a PQcancelConn back in state so that it can be
reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
---
doc/src/sgml/libpq.sgml | 461 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 297 ++++++++++-
src/interfaces/libpq/fe-connect.c | 129 ++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 5 +
.../modules/libpq_pipeline/libpq_pipeline.c | 125 +++++
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1013 insertions(+), 45 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a2bbf33d029..373d0dc3223 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5287,7 +5287,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,10 +6034,402 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelCreate">
+ <term><function>PQcancelCreate</function><indexterm><primary>PQcancelCreate</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelCreate(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelCreate"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it won't instantly start sending a cancel request over this
+ connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ Many connection parameters of the original client will be reused when
+ setting up the connection for the cancel request. Importantly, if the
+ original connection requires encryption of the connection and/or
+ verification of the target host (using <literal>sslmode</literal> or
+ <literal>gssencmode</literal>), then the connection for the cancel
+ request is made with these same requirements. Any connection options
+ that are only used during authentication or after authentication of the
+ client are ignored though, because cancellation requests do not require
+ authentication and the connection is closed right after the cancellation
+ request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelCreate</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the cancel connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the cancel
+ request was successfully dispatched.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call
+ returns <symbol>CONNECTION_BAD</symbol>, then the cancel procedure has
+ failed; if it returns <symbol>CONNECTION_OK</symbol>, then the cancel
+ request was successfully dispatched. Both of these states are equally
+ detectable from the return value of <function>PQcancelPoll</function>,
+ described above. Other states might also occur during (and only during)
+ an asynchronous connection procedure. These indicate the current stage
+ of the connection procedure and might be useful to provide feedback to
+ the user, for example. These statuses are:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-allocated">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelCreate"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-started">
+ <term><symbol>CONNECTION_STARTED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for connection to be made.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-made">
+ <term><symbol>CONNECTION_MADE</symbol></term>
+ <listitem>
+ <para>
+ Connection OK; waiting to send.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-awaiting-response">
+ <term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a response from the server.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-ssl-startup">
+ <term><symbol>CONNECTION_SSL_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating SSL encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating GSS encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ Note that, although these constants will remain (in order to maintain
+ compatibility), an application should never rely upon these occurring in a
+ particular order, or at all, or on the status always being one of these
+ documented values. An application might do something like this:
+<programlisting>
+switch(PQcancelStatus(cancelConn))
+{
+ case CONNECTION_STARTED:
+ feedback = "Connecting...";
+ break;
+
+ case CONNECTION_MADE:
+ feedback = "Connected to server...";
+ break;
+.
+.
+.
+ default:
+ feedback = "Connecting...";
+}
+</programlisting>
+ </para>
+
+ <para>
+ The <literal>connect_timeout</literal> connection parameter is ignored
+ when using <function>PQcancelPoll</function>; it is the application's
+ responsibility to decide whether an excessive amount of time has elapsed.
+ Otherwise, <function>PQcancelStart</function> followed by a
+ <function>PQcancelPoll</function> loop is equivalent to
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The status can be one of a number of values. However, only three of
+ these are seen outside of an asynchronous cancel procedure:
+ <literal>CONNECTION_ALLOCATED</literal>,
+ <literal>CONNECTION_OK</literal> and
+ <literal>CONNECTION_BAD</literal>. The initial state of a
+ <structname>PGcancelConn</structname> that's successfully created using
+ <xref linkend="libpq-PQcancelCreate"/> is <literal>CONNECTION_ALLOCATED</literal>.
+ A cancel request that was successfully dispatched
+ has the status <literal>CONNECTION_OK</literal>. A failed
+ cancel attempt is signaled by status
+ <literal>CONNECTION_BAD</literal>. An OK status will
+ remain so until <xref linkend="libpq-PQcancelFinish"/> or
+ <xref linkend="libpq-PQcancelReset"/> is called.
+ </para>
+
+ <para>
+ See the entries for <xref linkend="libpq-PQcancelStart"/> and <xref
+ linkend="libpq-PQcancelPoll"/> for other status codes that
+ might be returned.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+ currently open. This might change as a result of calling any of the
+ functions described in this section on the cancel connection (except for
+ <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ <indexterm><primary>error message</primary></indexterm> Returns the error message
+ most recently generated by an operation on the cancel connection.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelconn);
+</synopsis>
+ </para>
+
+ <para>
+ Nearly all <application>libpq</application> functions that take a
+ <structname>PGcancelConn</structname> will set a message for
+ <xref linkend="libpq-PQcancelErrorMessage"/> if they fail. Note that by
+ <application>libpq</application> convention, a nonempty
+ <xref linkend="libpq-PQcancelErrorMessage"/> result can consist of multiple lines,
+ and will include a trailing newline. The caller should not free
+ the result directly. It will be freed when the associated
+ <structname>PGcancelConn</structname> handle is passed to
+ <xref linkend="libpq-PQcancelFinish"/>. The result string should not be
+ expected to remain the same across operations on the
+ <literal>PGcancelConn</literal> structure.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, then that connection is closed. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+ These functions represent older methods of sending cancel requests.
+ Although they still work, they are deprecated because they do not send
+ the cancel request in an encrypted manner, even when the original
+ connection specified <literal>sslmode</literal> or
+ <literal>gssencmode</literal> to require encryption. Their use is
+ therefore discouraged in new code, and it is recommended to change
+ existing code to use the new functions instead.
+ </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6046,7 +6438,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6088,36 +6480,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+ <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+ used safely from within a signal handler.
+<synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+ <xref linkend="libpq-PQcancel"/> only exists for backwards
+ compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. The only advantage of <xref linkend="libpq-PQcancel"/>
+ is that it can be safely invoked from a signal handler, provided that
+ <parameter>errbuf</parameter> is a local variable in the signal handler.
+ However, this is generally not considered a big enough benefit to be
+ worth the security issues that this function has.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6129,13 +6522,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons; <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6150,7 +6551,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9362,7 +9763,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..9fbd3d34074 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelCreate 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index d69b8f9f9f4..f9ea32470ae 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -22,6 +22,18 @@
#include "port/pg_bswap.h"
+/*
+ * pg_cancel_conn (backing struct for PGcancelConn) is a wrapper around a
+ * PGconn to send cancellations using PQcancelBlocking and PQcancelStart. This
+ * isn't just a typedef because we want the compiler to complain when a PGconn
+ * is passed to a function that expects a PGcancelConn, and vice versa.
+ */
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
+
/*
* pg_cancel (backing struct for PGcancel) stores all data necessary to send a
* cancel request.
@@ -41,6 +53,289 @@ struct pg_cancel
};
+/*
+ * PQcancelCreate
+ *
+ * Create and return a PGcancelConn, which can be used to securely cancel a
+ * query on the given connection.
+ *
+ * This requires either following the non-blocking flow through
+ * PQcancelStart() and PQcancelPoll(), or the blocking PQcancelBlocking().
+ */
+PGcancelConn *
+PQcancelCreate(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = &cancelConn->conn;
+ int n;
+
+ /*
+ * We leave most of the connection establishment to PQconnectPoll, since
+ * it's very similar to normal connection establishment. But once we reach
+ * the CONNECTION_AWAITING_RESPONSE state we need to start doing our own
+ * thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus(&cancelConn->conn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket(&cancelConn->conn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage(&cancelConn->conn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn(&cancelConn->conn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish(&cancelConn->conn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -145,7 +440,7 @@ optional_setsockopt(int fd, int protoid, int optid, int value)
/*
- * PQcancel: request query cancel
+ * PQcancel: old, non-encrypted, but signal-safe way of requesting query cancel
*
* The return value is true if the cancel request was successfully
* dispatched, false if not (in which case an error message is available).
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..8e8634e5baf 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets error message of
+ * dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though: they should only try one host and
+ * address, and these fields have already been set up in PQcancelCreate,
+ * so leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses; instead, the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,13 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ return;
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4648,14 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Release the addrinfo, except for cancel requests: those never change
+ * their addrinfo, and releasing it would force us to rebuild it during
+ * a PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 2c06044a75e..09b485bd2bc 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -79,7 +79,9 @@ typedef enum
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Internal state: checking target server
* properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -166,6 +168,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -322,16 +329,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelCreate(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3abcd180d6d..9c05f11a6e9 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -669,6 +673,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index c6c7b1c3a17..a17c97bdaf4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -215,6 +215,7 @@ static void
test_cancel(PGconn *conn)
{
PGcancel *cancel;
+ PGcancelConn *cancelConn;
PGconn *monitorConn;
char errorbuf[256];
@@ -251,6 +252,130 @@ test_cancel(PGconn *conn)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_canceled(conn);
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelCreate and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test that PQcancelReset works on the cancel connection and that it can
+ * be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
fprintf(stderr, "ok\n");
}
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3052a181d1..aa7a25b8f8c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1763,6 +1763,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
base-commit: 4945e4ed4a72c3ff41560ccef722c3d70ae07dbb
--
2.34.1
On Tue, 12 Mar 2024 at 15:04, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
Attached a tiny change to 0001
One more tiny comment change, stating that pg_cancel is used by the
deprecated PQcancel function.
Attachments:
v36-0001-libpq-Add-encrypted-and-non-blocking-query-cance.patch
From 612e621defb0887f8bc695822b78dbc35f005832 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Tue, 12 Mar 2024 10:09:25 +0100
Subject: [PATCH v36 1/2] libpq: Add encrypted and non-blocking query
cancellation routines
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism of sending cancel requests; in particular, any
encryption used in the original connection is also used in the
cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate non-blocking cancel requests,
using encryption if the original connection did so, which must be
pumped using
- PQcancelPoll.
- PQcancelReset puts a PQcancelConn back in its initial state so that it
can be reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
---
doc/src/sgml/libpq.sgml | 461 ++++++++++++++++--
src/interfaces/libpq/exports.txt | 9 +
src/interfaces/libpq/fe-cancel.c | 299 +++++++++++-
src/interfaces/libpq/fe-connect.c | 129 ++++-
src/interfaces/libpq/libpq-fe.h | 31 +-
src/interfaces/libpq/libpq-int.h | 5 +
.../modules/libpq_pipeline/libpq_pipeline.c | 125 +++++
src/tools/pgindent/typedefs.list | 1 +
8 files changed, 1014 insertions(+), 46 deletions(-)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a2bbf33d029..373d0dc3223 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -265,7 +265,7 @@ PGconn *PQsetdb(char *pghost,
<varlistentry id="libpq-PQconnectStartParams">
<term><function>PQconnectStartParams</function><indexterm><primary>PQconnectStartParams</primary></indexterm></term>
<term><function>PQconnectStart</function><indexterm><primary>PQconnectStart</primary></indexterm></term>
- <term><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
+ <term id="libpq-PQconnectPoll"><function>PQconnectPoll</function><indexterm><primary>PQconnectPoll</primary></indexterm></term>
<listitem>
<para>
<indexterm><primary>nonblocking connection</primary></indexterm>
@@ -5287,7 +5287,7 @@ int PQisBusy(PGconn *conn);
<xref linkend="libpq-PQsendQuery"/>/<xref linkend="libpq-PQgetResult"/>
can also attempt to cancel a command that is still being processed
by the server; see <xref linkend="libpq-cancel"/>. But regardless of
- the return value of <xref linkend="libpq-PQcancel"/>, the application
+ the return value of <xref linkend="libpq-PQcancelBlocking"/>, the application
must continue with the normal result-reading sequence using
<xref linkend="libpq-PQgetResult"/>. A successful cancellation will
simply cause the command to terminate sooner than it would have
@@ -6034,10 +6034,402 @@ int PQsetSingleRowMode(PGconn *conn);
<secondary>SQL command</secondary>
</indexterm>
- <para>
- A client application can request cancellation of a command that is
- still being processed by the server, using the functions described in
- this section.
+ <sect2 id="libpq-cancel-conn">
+ <title>Functions for Sending Cancel Requests</title>
+ <variablelist>
+ <varlistentry id="libpq-PQcancelCreate">
+ <term><function>PQcancelCreate</function><indexterm><primary>PQcancelCreate</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Prepares a connection over which a cancel request can be sent.
+<synopsis>
+PGcancelConn *PQcancelCreate(PGconn *conn);
+</synopsis>
+ </para>
+
+ <para>
+ <xref linkend="libpq-PQcancelCreate"/> creates a
+ <structname>PGcancelConn</structname><indexterm><primary>PGcancelConn</primary></indexterm>
+ object, but it does not immediately start sending a cancel request over
+ this connection. A cancel request can be sent over this connection in a
+ blocking manner using <xref linkend="libpq-PQcancelBlocking"/> and in a
+ non-blocking manner using <xref linkend="libpq-PQcancelStart"/>.
+ The return value can be passed to <xref linkend="libpq-PQcancelStatus"/>
+ to check if the <structname>PGcancelConn</structname> object was
+ created successfully. The <structname>PGcancelConn</structname> object
+ is an opaque structure that is not meant to be accessed directly by the
+ application. This <structname>PGcancelConn</structname> object can be
+ used to cancel the query that's running on the original connection in a
+ thread-safe way.
+ </para>
+
+ <para>
+ Many connection parameters of the original client will be reused when
+ setting up the connection for the cancel request. Importantly, if the
+ original connection requires encryption of the connection and/or
+ verification of the target host (using <literal>sslmode</literal> or
+ <literal>gssencmode</literal>), then the connection for the cancel
+ request is made with these same requirements. Any connection options
+ that are only used during authentication or after authentication of the
+ client are ignored though, because cancellation requests do not require
+ authentication and the connection is closed right after the cancellation
+ request is submitted.
+ </para>
+
+ <para>
+ Note that when <function>PQcancelCreate</function> returns a non-null
+ pointer, you must call <xref linkend="libpq-PQcancelFinish"/> when you
+ are finished with it, in order to dispose of the structure and any
+ associated memory blocks. This must be done even if the cancel request
+ failed or was abandoned.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelBlocking">
+ <term><function>PQcancelBlocking</function><indexterm><primary>PQcancelBlocking</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a blocking manner.
+<synopsis>
+int PQcancelBlocking(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelBlocking"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStart">
+ <term><function>PQcancelStart</function><indexterm><primary>PQcancelStart</primary></indexterm></term>
+ <term id="libpq-PQcancelPoll"><function>PQcancelPoll</function><indexterm><primary>PQcancelPoll</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Requests that the server abandon processing of the current command, in a non-blocking manner.
+<synopsis>
+int PQcancelStart(PGcancelConn *cancelConn);
+
+PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The request is made over the given <structname>PGcancelConn</structname>,
+ which needs to be created with <xref linkend="libpq-PQcancelCreate"/>.
+ The return value of <xref linkend="libpq-PQcancelStart"/>
+ is 1 if the cancellation request could be started and 0 if not.
+ If it was unsuccessful, the error message can be
+ retrieved using <xref linkend="libpq-PQcancelErrorMessage"/>.
+ </para>
+
+ <para>
+ If <function>PQcancelStart</function> succeeds, the next stage
+ is to poll <application>libpq</application> so that it can proceed with
+ the cancel connection sequence.
+ Use <xref linkend="libpq-PQcancelSocket"/> to obtain the descriptor of the
+ socket underlying the database connection.
+ (Caution: do not assume that the socket remains the same
+ across <function>PQcancelPoll</function> calls.)
+ Loop thus: If <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_READING</symbol>, wait until the socket is ready to
+ read (as indicated by <function>select()</function>, <function>poll()</function>, or
+ similar system function).
+ Then call <function>PQcancelPoll(cancelConn)</function> again.
+ Conversely, if <function>PQcancelPoll(cancelConn)</function> last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>, wait until the socket is ready
+ to write, then call <function>PQcancelPoll(cancelConn)</function> again.
+ On the first iteration, i.e., if you have yet to call
+ <function>PQcancelPoll(cancelConn)</function>, behave as if it last returned
+ <symbol>PGRES_POLLING_WRITING</symbol>. Continue this loop until
+ <function>PQcancelPoll(cancelConn)</function> returns
+ <symbol>PGRES_POLLING_FAILED</symbol>, indicating the connection procedure
+ has failed, or <symbol>PGRES_POLLING_OK</symbol>, indicating the cancel
+ request was successfully dispatched.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ <para>
+ At any time during connection, the status of the connection can be
+ checked by calling <xref linkend="libpq-PQcancelStatus"/>. If this call returns <symbol>CONNECTION_BAD</symbol>, then the
+ cancel procedure has failed; if the call returns <symbol>CONNECTION_OK</symbol>, then the cancel request was successfully dispatched. Both of these states are equally detectable
+ from the return value of <function>PQcancelPoll</function>, described above. Other states might also occur
+ during (and only during) an asynchronous connection procedure. These
+ indicate the current stage of the connection procedure and might be useful
+ to provide feedback to the user for example. These statuses are:
+
+ <variablelist>
+ <varlistentry id="libpq-connection-allocated">
+ <term><symbol>CONNECTION_ALLOCATED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a call to <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>, to actually open the
+ socket. This is the connection state right after
+ calling <xref linkend="libpq-PQcancelCreate"/>
+ or <xref linkend="libpq-PQcancelReset"/>. No connection to the
+ server has been initiated yet at this point. To actually start
+ sending the cancel request use <xref linkend="libpq-PQcancelStart"/> or
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-started">
+ <term><symbol>CONNECTION_STARTED</symbol></term>
+ <listitem>
+ <para>
+ Waiting for connection to be made.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-made">
+ <term><symbol>CONNECTION_MADE</symbol></term>
+ <listitem>
+ <para>
+ Connection OK; waiting to send.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-awaiting-response">
+ <term><symbol>CONNECTION_AWAITING_RESPONSE</symbol></term>
+ <listitem>
+ <para>
+ Waiting for a response from the server.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-ssl-startup">
+ <term><symbol>CONNECTION_SSL_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating SSL encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-cancel-connection-gss-startup">
+ <term><symbol>CONNECTION_GSS_STARTUP</symbol></term>
+ <listitem>
+ <para>
+ Negotiating GSS encryption.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ Note that, although these constants will remain (in order to maintain
+ compatibility), an application should never rely upon these occurring in a
+ particular order, or at all, or on the status always being one of these
+ documented values. An application might do something like this:
+<programlisting>
+switch (PQcancelStatus(cancelConn))
+{
+ case CONNECTION_STARTED:
+ feedback = "Connecting...";
+ break;
+
+ case CONNECTION_MADE:
+ feedback = "Connected to server...";
+ break;
+.
+.
+.
+ default:
+ feedback = "Connecting...";
+}
+</programlisting>
+ </para>
+
+ <para>
+ The <literal>connect_timeout</literal> connection parameter is ignored
+ when using <function>PQcancelPoll</function>; it is the application's
+ responsibility to decide whether an excessive amount of time has elapsed.
+ Otherwise, <function>PQcancelStart</function> followed by a
+ <function>PQcancelPoll</function> loop is equivalent to
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelStatus">
+ <term><function>PQcancelStatus</function><indexterm><primary>PQcancelStatus</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Returns the status of the cancel connection.
+<synopsis>
+ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ The status can be one of a number of values. However, only three of
+ these are seen outside of an asynchronous cancel procedure:
+ <literal>CONNECTION_ALLOCATED</literal>,
+ <literal>CONNECTION_OK</literal> and
+ <literal>CONNECTION_BAD</literal>. The initial state of a
+ <structname>PGcancelConn</structname> that's successfully created using
+ <xref linkend="libpq-PQcancelCreate"/> is <literal>CONNECTION_ALLOCATED</literal>.
+ A cancel request that was successfully dispatched
+ has the status <literal>CONNECTION_OK</literal>. A failed
+ cancel attempt is signaled by status
+ <literal>CONNECTION_BAD</literal>. An OK status will
+ remain so until <xref linkend="libpq-PQcancelFinish"/> or
+ <xref linkend="libpq-PQcancelReset"/> is called.
+ </para>
+
+ <para>
+ See the entry for <xref linkend="libpq-PQcancelStart"/> and <xref
+ linkend="libpq-PQcancelPoll"/> regarding other status codes that
+ might be returned.
+ </para>
+
+ <para>
+ Successful dispatch of the cancellation is no guarantee that the request
+ will have any effect, however. If the cancellation is effective, the
+ command being canceled will terminate early and return an error result.
+ If the cancellation fails (say, because the server was already done
+ processing the command), then there will be no visible result at all.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelSocket">
+ <term><function>PQcancelSocket</function><indexterm><primary>PQcancelSocket</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ Obtains the file descriptor number of the cancel connection socket to
+ the server. A valid descriptor will be greater than or equal
+ to 0; a result of -1 indicates that no server connection is
+ currently open. This might change as a result of calling any of the
+ functions described in this section on the <structname>PGcancelConn</structname>
+ (except for <xref linkend="libpq-PQcancelErrorMessage"/> and
+ <function>PQcancelSocket</function> itself).
+<synopsis>
+int PQcancelSocket(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelErrorMessage">
+ <term><function>PQcancelErrorMessage</function><indexterm><primary>PQcancelErrorMessage</primary></indexterm></term>
+
+ <listitem>
+ <para>
+ <indexterm><primary>error message</primary></indexterm> Returns the error message
+ most recently generated by an operation on the cancel connection.
+<synopsis>
+char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Nearly all <application>libpq</application> functions that take a
+ <structname>PGcancelConn</structname> will set a message for
+ <xref linkend="libpq-PQcancelErrorMessage"/> if they fail. Note that by
+ <application>libpq</application> convention, a nonempty
+ <xref linkend="libpq-PQcancelErrorMessage"/> result can consist of multiple lines,
+ and will include a trailing newline. The caller should not free
+ the result directly. It will be freed when the associated
+ <structname>PGcancelConn</structname> handle is passed to
+ <xref linkend="libpq-PQcancelFinish"/>. The result string should not be
+ expected to remain the same across operations on the
+ <literal>PGcancelConn</literal> structure.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelFinish">
+ <term><function>PQcancelFinish</function><indexterm><primary>PQcancelFinish</primary></indexterm></term>
+ <listitem>
+ <para>
+ Closes the cancel connection (if it has not yet finished sending the
+ cancel request). Also frees memory used by the <structname>PGcancelConn</structname>
+ object.
+<synopsis>
+void PQcancelFinish(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ Note that even if the cancel attempt fails (as
+ indicated by <xref linkend="libpq-PQcancelStatus"/>), the application should call <xref linkend="libpq-PQcancelFinish"/>
+ to free the memory used by the <structname>PGcancelConn</structname> object.
+ The <structname>PGcancelConn</structname> pointer must not be used again after
+ <xref linkend="libpq-PQcancelFinish"/> has been called.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry id="libpq-PQcancelReset">
+ <term><function>PQcancelReset</function><indexterm><primary>PQcancelReset</primary></indexterm></term>
+ <listitem>
+ <para>
+ Resets the <symbol>PGcancelConn</symbol> so it can be reused for a new
+ cancel connection.
+<synopsis>
+void PQcancelReset(PGcancelConn *cancelConn);
+</synopsis>
+ </para>
+
+ <para>
+ If the <symbol>PGcancelConn</symbol> is currently being used to send a
+ cancel request, that connection is closed first. The
+ <symbol>PGcancelConn</symbol> object is then prepared so that it can be
+ used to send a new cancel request. This makes it possible to create one
+ <symbol>PGcancelConn</symbol> for a <symbol>PGconn</symbol> and reuse it
+ multiple times throughout the lifetime of the original <symbol>PGconn</symbol>.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </sect2>
+
+ <sect2 id="libpq-cancel-deprecated">
+ <title>Obsolete Functions for Sending Cancel Requests</title>
+
+ <para>
+ These functions represent older methods of sending cancel requests.
+ Although they still work, they are deprecated due to not sending the cancel
+ requests in an encrypted manner, even when the original connection
+ specified <literal>sslmode</literal> or <literal>gssencmode</literal> to
+ require encryption. Their use in new code is therefore discouraged, and
+ it is recommended that existing code be converted to use the new
+ functions instead.
+ </para>
<variablelist>
<varlistentry id="libpq-PQgetCancel">
@@ -6046,7 +6438,7 @@ int PQsetSingleRowMode(PGconn *conn);
<listitem>
<para>
Creates a data structure containing the information needed to cancel
- a command issued through a particular database connection.
+ a command using <xref linkend="libpq-PQcancel"/>.
<synopsis>
PGcancel *PQgetCancel(PGconn *conn);
</synopsis>
@@ -6088,36 +6480,37 @@ void PQfreeCancel(PGcancel *cancel);
<listitem>
<para>
- Requests that the server abandon processing of the current command.
-<synopsis>
+ <xref linkend="libpq-PQcancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>, but one that can be
+ used safely from within a signal handler. <synopsis>
int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
</synopsis>
</para>
<para>
- The return value is 1 if the cancel request was successfully
- dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
- with an explanatory error message. <parameter>errbuf</parameter>
- must be a char array of size <parameter>errbufsize</parameter> (the
- recommended size is 256 bytes).
+ <xref linkend="libpq-PQcancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. The only benefit that <xref linkend="libpq-PQcancel"/> has
+ is that it can be safely invoked from a signal handler, if the
+ <parameter>errbuf</parameter> is a local variable in the signal handler.
+ However, this is generally not considered a big enough benefit to be
+ worth the security issues of this function.
</para>
<para>
- Successful dispatch is no guarantee that the request will have
- any effect, however. If the cancellation is effective, the current
- command will terminate early and return an error result. If the
- cancellation fails (say, because the server was already done
- processing the command), then there will be no visible result at
- all.
+ The <structname>PGcancel</structname> object is read-only as far as
+ <xref linkend="libpq-PQcancel"/> is concerned, so it can also be invoked
+ from a thread that is separate from the one manipulating the
+ <structname>PGconn</structname> object.
</para>
<para>
- <xref linkend="libpq-PQcancel"/> can safely be invoked from a signal
- handler, if the <parameter>errbuf</parameter> is a local variable in the
- signal handler. The <structname>PGcancel</structname> object is read-only
- as far as <xref linkend="libpq-PQcancel"/> is concerned, so it can
- also be invoked from a thread that is separate from the one
- manipulating the <structname>PGconn</structname> object.
+ The return value of <xref linkend="libpq-PQcancel"/>
+ is 1 if the cancel request was successfully
+ dispatched and 0 if not. If not, <parameter>errbuf</parameter> is filled
+ with an explanatory error message. <parameter>errbuf</parameter>
+ must be a char array of size <parameter>errbufsize</parameter> (the
+ recommended size is 256 bytes).
</para>
</listitem>
</varlistentry>
@@ -6129,13 +6522,21 @@ int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
<listitem>
<para>
- <xref linkend="libpq-PQrequestCancel"/> is a deprecated variant of
- <xref linkend="libpq-PQcancel"/>.
+ <xref linkend="libpq-PQrequestCancel"/> is a deprecated and insecure
+ variant of <xref linkend="libpq-PQcancelBlocking"/>.
<synopsis>
int PQrequestCancel(PGconn *conn);
</synopsis>
</para>
+ <para>
+ <xref linkend="libpq-PQrequestCancel"/> only exists for backwards
+ compatibility reasons. <xref linkend="libpq-PQcancelBlocking"/> should be
+ used instead. There is no benefit to using
+ <xref linkend="libpq-PQrequestCancel"/> over
+ <xref linkend="libpq-PQcancelBlocking"/>.
+ </para>
+
<para>
Requests that the server abandon processing of the current
command. It operates directly on the
@@ -6150,7 +6551,7 @@ int PQrequestCancel(PGconn *conn);
</listitem>
</varlistentry>
</variablelist>
- </para>
+ </sect2>
</sect1>
@@ -9362,7 +9763,7 @@ int PQisthreadsafe();
The deprecated functions <xref linkend="libpq-PQrequestCancel"/> and
<xref linkend="libpq-PQoidStatus"/> are not thread-safe and should not be
used in multithread programs. <xref linkend="libpq-PQrequestCancel"/>
- can be replaced by <xref linkend="libpq-PQcancel"/>.
+ can be replaced by <xref linkend="libpq-PQcancelBlocking"/>.
<xref linkend="libpq-PQoidStatus"/> can be replaced by
<xref linkend="libpq-PQoidValue"/>.
</para>
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb16..9fbd3d34074 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,12 @@ PQsendClosePrepared 190
PQsendClosePortal 191
PQchangePassword 192
PQsendPipelineSync 193
+PQcancelBlocking 194
+PQcancelStart 195
+PQcancelCreate 196
+PQcancelPoll 197
+PQcancelStatus 198
+PQcancelSocket 199
+PQcancelErrorMessage 200
+PQcancelReset 201
+PQcancelFinish 202
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index d69b8f9f9f4..4d67cb50e9b 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -22,9 +22,21 @@
#include "port/pg_bswap.h"
+/*
+ * pg_cancel_conn (backing struct for PGcancelConn) is a wrapper around a
+ * PGconn to send cancellations using PQcancelBlocking and PQcancelStart. This
+ * isn't just a typedef because we want the compiler to complain when a PGconn
+ * is passed to a function that expects a PGcancelConn, and vice versa.
+ */
+struct pg_cancel_conn
+{
+ PGconn conn;
+};
+
+
/*
* pg_cancel (backing struct for PGcancel) stores all data necessary to send a
- * cancel request.
+ * cancel request using the deprecated PQcancel function.
*/
struct pg_cancel
{
@@ -41,6 +53,289 @@ struct pg_cancel
};
+/*
+ * PQcancelCreate
+ *
+ * Create and return a PGcancelConn, which can be used to securely cancel a
+ * query on the given connection.
+ *
+ * This requires either following the non-blocking flow through
+ * PQcancelStart() and PQcancelPoll(), or the blocking PQcancelBlocking().
+ */
+PGcancelConn *
+PQcancelCreate(PGconn *conn)
+{
+ PGconn *cancelConn = pqMakeEmptyPGconn();
+ pg_conn_host originalHost;
+
+ if (cancelConn == NULL)
+ return NULL;
+
+ /* Check we have an open connection */
+ if (!conn)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection was NULL");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ if (conn->sock == PGINVALID_SOCKET)
+ {
+ libpq_append_conn_error(cancelConn, "passed connection is not open");
+ return (PGcancelConn *) cancelConn;
+ }
+
+ /*
+ * Indicate that this connection is used to send a cancellation
+ */
+ cancelConn->cancelRequest = true;
+
+ if (!pqCopyPGconn(conn, cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Compute derived options
+ */
+ if (!pqConnectOptions2(cancelConn))
+ return (PGcancelConn *) cancelConn;
+
+ /*
+ * Copy cancellation token data from the original connection
+ */
+ cancelConn->be_pid = conn->be_pid;
+ cancelConn->be_key = conn->be_key;
+
+ /*
+ * Cancel requests should not iterate over all possible hosts. The request
+ * needs to be sent to the exact host and address that the original
+ * connection used. So we manually create the host and address arrays with
+ * a single element after freeing the host array that we generated from
+ * the connection options.
+ */
+ pqReleaseConnHosts(cancelConn);
+ cancelConn->nconnhost = 1;
+ cancelConn->naddr = 1;
+
+ cancelConn->connhost = calloc(cancelConn->nconnhost, sizeof(pg_conn_host));
+ if (!cancelConn->connhost)
+ goto oom_error;
+
+ originalHost = conn->connhost[conn->whichhost];
+ if (originalHost.host)
+ {
+ cancelConn->connhost[0].host = strdup(originalHost.host);
+ if (!cancelConn->connhost[0].host)
+ goto oom_error;
+ }
+ if (originalHost.hostaddr)
+ {
+ cancelConn->connhost[0].hostaddr = strdup(originalHost.hostaddr);
+ if (!cancelConn->connhost[0].hostaddr)
+ goto oom_error;
+ }
+ if (originalHost.port)
+ {
+ cancelConn->connhost[0].port = strdup(originalHost.port);
+ if (!cancelConn->connhost[0].port)
+ goto oom_error;
+ }
+ if (originalHost.password)
+ {
+ cancelConn->connhost[0].password = strdup(originalHost.password);
+ if (!cancelConn->connhost[0].password)
+ goto oom_error;
+ }
+
+ cancelConn->addr = calloc(cancelConn->naddr, sizeof(AddrInfo));
+ if (!cancelConn->addr)
+ goto oom_error;
+
+ cancelConn->addr[0].addr = conn->raddr;
+ cancelConn->addr[0].family = conn->raddr.addr.ss_family;
+
+ cancelConn->status = CONNECTION_ALLOCATED;
+ return (PGcancelConn *) cancelConn;
+
+oom_error:
+ cancelConn->status = CONNECTION_BAD;
+ libpq_append_conn_error(cancelConn, "out of memory");
+ return (PGcancelConn *) cancelConn;
+}
+
+
+/*
+ * PQcancelBlocking
+ *
+ * Send a cancellation request in a blocking fashion.
+ * Returns 1 if successful, 0 if not.
+ */
+int
+PQcancelBlocking(PGcancelConn *cancelConn)
+{
+ if (!PQcancelStart(cancelConn))
+ return 0;
+ return pqConnectDBComplete(&cancelConn->conn);
+}
+
+/*
+ * PQcancelStart
+ *
+ * Starts sending a cancellation request in a non-blocking fashion. Returns
+ * 1 if successful, 0 if not.
+ */
+int
+PQcancelStart(PGcancelConn *cancelConn)
+{
+ if (!cancelConn || cancelConn->conn.status == CONNECTION_BAD)
+ return 0;
+
+ if (cancelConn->conn.status != CONNECTION_ALLOCATED)
+ {
+ libpq_append_conn_error(&cancelConn->conn,
+ "cancel request is already being sent on this connection");
+ cancelConn->conn.status = CONNECTION_BAD;
+ return 0;
+ }
+
+ return pqConnectDBStart(&cancelConn->conn);
+}
+
+/*
+ * PQcancelPoll
+ *
+ * Poll a cancel connection. For usage details see PQconnectPoll.
+ */
+PostgresPollingStatusType
+PQcancelPoll(PGcancelConn *cancelConn)
+{
+ PGconn *conn = &cancelConn->conn;
+ int n;
+
+ /*
+ * We leave most of the connection establishment to PQconnectPoll, since
+ * it's very similar to normal connection establishment. But once we get
+ * to the CONNECTION_AWAITING_RESPONSE phase we need to start doing our
+ * own thing.
+ */
+ if (conn->status != CONNECTION_AWAITING_RESPONSE)
+ {
+ return PQconnectPoll(conn);
+ }
+
+ /*
+ * At this point we are waiting on the server to close the connection,
+ * which is its way of communicating that the cancel has been handled.
+ */
+
+ n = pqReadData(conn);
+
+ if (n == 0)
+ return PGRES_POLLING_READING;
+
+#ifndef WIN32
+
+ /*
+ * If we receive an error, report it, but only if errno is non-zero.
+ * Otherwise we assume it's an EOF, which is what we expect from the
+ * server.
+ *
+ * We skip this for Windows, because Windows is a bit special in its EOF
+ * behaviour for TCP. Sometimes it will error with an ECONNRESET when
+ * there is a clean connection closure. See these threads for details:
+ * https://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de
+ *
+ * https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com
+ *
+ * PQcancel ignores such errors and reports success for the cancellation
+ * anyway, so even if this is not always correct we do the same here.
+ */
+ if (n < 0 && errno != 0)
+ {
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+#endif
+
+ /*
+ * We don't expect any data, only connection closure. So if we strangely
+ * do receive some data we consider that an error.
+ */
+ if (n > 0)
+ {
+ libpq_append_conn_error(conn, "received unexpected response from server");
+ conn->status = CONNECTION_BAD;
+ return PGRES_POLLING_FAILED;
+ }
+
+ /*
+ * Getting here means that we received an EOF, which is what we were
+ * expecting -- the cancel request has completed.
+ */
+ cancelConn->conn.status = CONNECTION_OK;
+ resetPQExpBuffer(&conn->errorMessage);
+ return PGRES_POLLING_OK;
+}
+
+/*
+ * PQcancelStatus
+ *
+ * Get the status of a cancel connection.
+ */
+ConnStatusType
+PQcancelStatus(const PGcancelConn *cancelConn)
+{
+ return PQstatus(&cancelConn->conn);
+}
+
+/*
+ * PQcancelSocket
+ *
+ * Get the socket of the cancel connection.
+ */
+int
+PQcancelSocket(const PGcancelConn *cancelConn)
+{
+ return PQsocket(&cancelConn->conn);
+}
+
+/*
+ * PQcancelErrorMessage
+ *
+ * Get the error message of the cancel connection.
+ */
+char *
+PQcancelErrorMessage(const PGcancelConn *cancelConn)
+{
+ return PQerrorMessage(&cancelConn->conn);
+}
+
+/*
+ * PQcancelReset
+ *
+ * Resets the cancel connection, so it can be reused to send a new cancel
+ * request.
+ */
+void
+PQcancelReset(PGcancelConn *cancelConn)
+{
+ pqClosePGconn(&cancelConn->conn);
+ cancelConn->conn.status = CONNECTION_ALLOCATED;
+ cancelConn->conn.whichhost = 0;
+ cancelConn->conn.whichaddr = 0;
+ cancelConn->conn.try_next_host = false;
+ cancelConn->conn.try_next_addr = false;
+}
+
+/*
+ * PQcancelFinish
+ *
+ * Closes and frees the cancel connection.
+ */
+void
+PQcancelFinish(PGcancelConn *cancelConn)
+{
+ PQfinish(&cancelConn->conn);
+}
+
/*
* PQgetCancel: get a PGcancel structure corresponding to a connection.
*
@@ -145,7 +440,7 @@ optional_setsockopt(int fd, int protoid, int optid, int value)
/*
- * PQcancel: request query cancel
+ * PQcancel: old, non-encrypted, but signal-safe way of requesting query cancel
*
* The return value is true if the cancel request was successfully
* dispatched, false if not (in which case an error message is available).
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f3..8e8634e5baf 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -616,8 +616,17 @@ pqDropServerData(PGconn *conn)
conn->write_failed = false;
free(conn->write_err_msg);
conn->write_err_msg = NULL;
- conn->be_pid = 0;
- conn->be_key = 0;
+
+ /*
+ * Cancel connections need to retain their be_pid and be_key across
+ * PQcancelReset invocations, otherwise they would not have access to the
+ * secret token of the connection they are supposed to cancel.
+ */
+ if (!conn->cancelRequest)
+ {
+ conn->be_pid = 0;
+ conn->be_key = 0;
+ }
}
@@ -923,6 +932,45 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
return true;
}
+/*
+ * Copy over option values from srcConn to dstConn
+ *
+ * Don't put anything cute here --- intelligence should be in
+ * connectOptions2 ...
+ *
+ * Returns true on success. On failure, returns false and sets the error
+ * message of dstConn.
+ */
+bool
+pqCopyPGconn(PGconn *srcConn, PGconn *dstConn)
+{
+ const internalPQconninfoOption *option;
+
+ /* copy over connection options */
+ for (option = PQconninfoOptions; option->keyword; option++)
+ {
+ if (option->connofs >= 0)
+ {
+ const char **tmp = (const char **) ((char *) srcConn + option->connofs);
+
+ if (*tmp)
+ {
+ char **dstConnmember = (char **) ((char *) dstConn + option->connofs);
+
+ if (*dstConnmember)
+ free(*dstConnmember);
+ *dstConnmember = strdup(*tmp);
+ if (*dstConnmember == NULL)
+ {
+ libpq_append_conn_error(dstConn, "out of memory");
+ return false;
+ }
+ }
+ }
+ }
+ return true;
+}
+
/*
* connectOptions1
*
@@ -2308,10 +2356,18 @@ pqConnectDBStart(PGconn *conn)
* Set up to try to connect to the first host. (Setting whichhost = -1 is
* a bit of a cheat, but PQconnectPoll will advance it to 0 before
* anything else looks at it.)
+ *
+ * Cancel requests are special though, they should only try one host and
+ * address, and these fields have already been set up in PQcancelCreate,
+ * so leave them alone for cancel requests.
*/
- conn->whichhost = -1;
- conn->try_next_addr = false;
- conn->try_next_host = true;
+ if (!conn->cancelRequest)
+ {
+ conn->whichhost = -1;
+ conn->try_next_host = true;
+ conn->try_next_addr = false;
+ }
+
conn->status = CONNECTION_NEEDED;
/* Also reset the target_server_type state if needed */
@@ -2453,7 +2509,10 @@ pqConnectDBComplete(PGconn *conn)
/*
* Now try to advance the state machine.
*/
- flag = PQconnectPoll(conn);
+ if (conn->cancelRequest)
+ flag = PQcancelPoll((PGcancelConn *) conn);
+ else
+ flag = PQconnectPoll(conn);
}
}
@@ -2578,13 +2637,17 @@ keep_going: /* We will come back to here until there is
* Oops, no more hosts.
*
* If we are trying to connect in "prefer-standby" mode, then drop
- * the standby requirement and start over.
+ * the standby requirement and start over. Don't do this for
+ * cancel requests though, since we are certain the list of
+ * servers won't change as the target_server_type option is not
+ * applicable to those connections.
*
* Otherwise, an appropriate error message is already set up, so
* we just need to set the right status.
*/
if (conn->target_server_type == SERVER_TYPE_PREFER_STANDBY &&
- conn->nconnhost > 0)
+ conn->nconnhost > 0 &&
+ !conn->cancelRequest)
{
conn->target_server_type = SERVER_TYPE_PREFER_STANDBY_PASS2;
conn->whichhost = 0;
@@ -3226,6 +3289,29 @@ keep_going: /* We will come back to here until there is
}
#endif /* USE_SSL */
+ /*
+ * For cancel requests this is as far as we need to go in the
+ * connection establishment. Now we can actually send our
+ * cancellation request.
+ */
+ if (conn->cancelRequest)
+ {
+ CancelRequestPacket cancelpacket;
+
+ packetlen = sizeof(cancelpacket);
+ cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
+ cancelpacket.backendPID = pg_hton32(conn->be_pid);
+ cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+ if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
+ {
+ libpq_append_conn_error(conn, "could not send cancel packet: %s",
+ SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
+ goto error_return;
+ }
+ conn->status = CONNECTION_AWAITING_RESPONSE;
+ return PGRES_POLLING_READING;
+ }
+
/*
* Build the startup packet.
*/
@@ -3975,8 +4061,14 @@ keep_going: /* We will come back to here until there is
}
}
- /* We can release the address list now. */
- release_conn_addrinfo(conn);
+ /*
+ * For non-cancel requests we can release the address list
+ * now. For cancel requests we never actually resolve
+ * addresses and instead the addrinfo exists for the lifetime
+ * of the connection.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/*
* Contents of conn->errorMessage are no longer interesting
@@ -4344,6 +4436,7 @@ freePGconn(PGconn *conn)
free(conn->events[i].name);
}
+ release_conn_addrinfo(conn);
pqReleaseConnHosts(conn);
free(conn->client_encoding_initial);
@@ -4495,6 +4588,13 @@ release_conn_addrinfo(PGconn *conn)
static void
sendTerminateConn(PGconn *conn)
{
+ /*
+ * The Postgres cancellation protocol does not have a notion of a
+ * Terminate message, so don't send one.
+ */
+ if (conn->cancelRequest)
+ return;
+
/*
* Note that the protocol doesn't allow us to send Terminate messages
* during the startup phase.
@@ -4548,7 +4648,14 @@ pqClosePGconn(PGconn *conn)
conn->pipelineStatus = PQ_PIPELINE_OFF;
pqClearAsyncResult(conn); /* deallocate result */
pqClearConnErrorState(conn);
- release_conn_addrinfo(conn);
+
+ /*
+ * Release the addrinfo, except for cancel requests. Cancel requests
+ * never change their addrinfo, and freeing it here would force us to
+ * rebuild it during PQcancelReset.
+ */
+ if (!conn->cancelRequest)
+ release_conn_addrinfo(conn);
/* Reset all state obtained from server, too */
pqDropServerData(conn);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 2c06044a75e..09b485bd2bc 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -79,7 +79,9 @@ typedef enum
CONNECTION_GSS_STARTUP, /* Negotiating GSSAPI. */
CONNECTION_CHECK_TARGET, /* Internal state: checking target server
* properties. */
- CONNECTION_CHECK_STANDBY /* Checking if server is in standby mode. */
+ CONNECTION_CHECK_STANDBY, /* Checking if server is in standby mode. */
+ CONNECTION_ALLOCATED /* Waiting for connection attempt to be
+ * started. */
} ConnStatusType;
typedef enum
@@ -166,6 +168,11 @@ typedef enum
*/
typedef struct pg_conn PGconn;
+/* PGcancelConn encapsulates a cancel connection to the backend.
+ * The contents of this struct are not supposed to be known to applications.
+ */
+typedef struct pg_cancel_conn PGcancelConn;
+
/* PGresult encapsulates the result of a query (or more precisely, of a single
* SQL command --- a query string given to PQsendQuery can contain multiple
* commands and thus return multiple PGresult objects).
@@ -322,16 +329,34 @@ extern PostgresPollingStatusType PQresetPoll(PGconn *conn);
/* Synchronous (blocking) */
extern void PQreset(PGconn *conn);
+/* Create a PGcancelConn that's used to cancel a query on the given PGconn */
+extern PGcancelConn *PQcancelCreate(PGconn *conn);
+
+/* issue a cancel request in a non-blocking manner */
+extern int PQcancelStart(PGcancelConn *cancelConn);
+
+/* issue a blocking cancel request */
+extern int PQcancelBlocking(PGcancelConn *cancelConn);
+
+/* poll a non-blocking cancel request */
+extern PostgresPollingStatusType PQcancelPoll(PGcancelConn *cancelConn);
+extern ConnStatusType PQcancelStatus(const PGcancelConn *cancelConn);
+extern int PQcancelSocket(const PGcancelConn *cancelConn);
+extern char *PQcancelErrorMessage(const PGcancelConn *cancelConn);
+extern void PQcancelReset(PGcancelConn *cancelConn);
+extern void PQcancelFinish(PGcancelConn *cancelConn);
+
+
/* request a cancel structure */
extern PGcancel *PQgetCancel(PGconn *conn);
/* free a cancel structure */
extern void PQfreeCancel(PGcancel *cancel);
-/* issue a cancel request */
+/* deprecated version of PQcancelBlocking, but one which is signal-safe */
extern int PQcancel(PGcancel *cancel, char *errbuf, int errbufsize);
-/* backwards compatible version of PQcancel; not thread-safe */
+/* deprecated version of PQcancel; not thread-safe */
extern int PQrequestCancel(PGconn *conn);
/* Accessor functions for PGconn objects */
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 3abcd180d6d..9c05f11a6e9 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -409,6 +409,10 @@ struct pg_conn
char *require_auth; /* name of the expected auth method */
char *load_balance_hosts; /* load balance over hosts */
+ bool cancelRequest; /* true if this connection is used to send a
+ * cancel request, instead of being a normal
+ * connection that's used for queries */
+
/* Optional file to write trace info to */
FILE *Pfdebug;
int traceFlags;
@@ -669,6 +673,7 @@ extern void pqClosePGconn(PGconn *conn);
extern int pqPacketSend(PGconn *conn, char pack_type,
const void *buf, size_t buf_len);
extern bool pqGetHomeDirectory(char *buf, int bufsize);
+extern bool pqCopyPGconn(PGconn *srcConn, PGconn *dstConn);
extern bool pqParseIntParam(const char *value, int *result, PGconn *conn,
const char *context);
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index c6c7b1c3a17..a17c97bdaf4 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -215,6 +215,7 @@ static void
test_cancel(PGconn *conn)
{
PGcancel *cancel;
+ PGcancelConn *cancelConn;
PGconn *monitorConn;
char errorbuf[256];
@@ -251,6 +252,130 @@ test_cancel(PGconn *conn)
pg_fatal("failed to run PQrequestCancel: %s", PQerrorMessage(conn));
confirm_query_canceled(conn);
+ /* test PQcancelBlocking */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelBlocking(cancelConn))
+ pg_fatal("failed to run PQcancelBlocking: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+ PQcancelFinish(cancelConn);
+
+ /* test PQcancelCreate and then polling with PQcancelPoll */
+ send_cancellable_query(conn, monitorConn);
+ cancelConn = PQcancelCreate(conn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ /*
+ * test that PQcancelReset works on the cancel connection and that it
+ * can be reused afterwards
+ */
+ PQcancelReset(cancelConn);
+
+ send_cancellable_query(conn, monitorConn);
+ if (!PQcancelStart(cancelConn))
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ while (true)
+ {
+ struct timeval tv;
+ fd_set input_mask;
+ fd_set output_mask;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
+ int sock = PQcancelSocket(cancelConn);
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ pg_debug("polling for reads\n");
+ FD_SET(sock, &input_mask);
+ break;
+ case PGRES_POLLING_WRITING:
+ pg_debug("polling for writes\n");
+ FD_SET(sock, &output_mask);
+ break;
+ default:
+ pg_fatal("bad cancel connection: %s", PQcancelErrorMessage(cancelConn));
+ }
+
+ if (sock < 0)
+ pg_fatal("sock did not exist: %s", PQcancelErrorMessage(cancelConn));
+
+ tv.tv_sec = 3;
+ tv.tv_usec = 0;
+
+ while (true)
+ {
+ if (select(sock + 1, &input_mask, &output_mask, NULL, &tv) < 0)
+ {
+ if (errno == EINTR)
+ continue;
+ pg_fatal("select() failed: %m");
+ }
+ break;
+ }
+ }
+ if (PQcancelStatus(cancelConn) != CONNECTION_OK)
+ pg_fatal("unexpected cancel connection status: %s", PQcancelErrorMessage(cancelConn));
+ confirm_query_canceled(conn);
+
+ PQcancelFinish(cancelConn);
+
fprintf(stderr, "ok\n");
}
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a3052a181d1..aa7a25b8f8c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1763,6 +1763,7 @@ PG_Locale_Strategy
PG_Lock_Status
PG_init_t
PGcancel
+PGcancelConn
PGcmdQueueEntry
PGconn
PGdataValue
base-commit: 4945e4ed4a72c3ff41560ccef722c3d70ae07dbb
--
2.34.1
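For reference, driving the new API from an event loop looks roughly like this. This is a minimal sketch, assuming a libpq build that includes this patch; it uses a bare select() in place of a real event loop and simplifies error handling:

```c
/* Sketch: non-blocking query cancellation with the new PGcancelConn API.
 * Assumes a libpq build containing this patch; "conn" is an existing,
 * connected PGconn on which a query is currently running. */
#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

static int
cancel_query_nonblocking(PGconn *conn)
{
	PGcancelConn *cancelConn = PQcancelCreate(conn);

	if (!PQcancelStart(cancelConn))
	{
		fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));
		PQcancelFinish(cancelConn);
		return -1;
	}

	/* Advance the cancel connection until it completes or fails. In a
	 * real event loop the socket would be registered with the loop
	 * instead of waited on directly with select(). */
	for (;;)
	{
		PostgresPollingStatusType pollres = PQcancelPoll(cancelConn);
		int			sock = PQcancelSocket(cancelConn);
		fd_set		rmask;
		fd_set		wmask;

		if (pollres == PGRES_POLLING_OK)
			break;
		if (pollres == PGRES_POLLING_FAILED || sock < 0)
		{
			fprintf(stderr, "cancel failed: %s", PQcancelErrorMessage(cancelConn));
			PQcancelFinish(cancelConn);
			return -1;
		}

		FD_ZERO(&rmask);
		FD_ZERO(&wmask);
		if (pollres == PGRES_POLLING_READING)
			FD_SET(sock, &rmask);
		else
			FD_SET(sock, &wmask);
		(void) select(sock + 1, &rmask, &wmask, NULL, NULL);
	}

	PQcancelFinish(cancelConn);
	return 0;
}
```

After PGRES_POLLING_OK, PQcancelStatus() reports CONNECTION_OK, and PQcancelReset() lets the same PGcancelConn be reused for a later cancel request. This sketch cannot be verified without a running server and the patched libpq, so treat it as illustrative only.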
Attachment: v36-0002-Start-using-new-libpq-cancel-APIs.patch
From 2164d349aed461bf0a4dc457fdf4b2ee1b333676 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v36 2/2] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..98dcca3e6fd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..dcc13dc3b24 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56f..e03160bd975 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6da..2626e68cc69 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..5ed9f3ba17b 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index ed110f740f1..0b342b5c2bb 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
On 2024-Mar-12, Jelte Fennema-Nio wrote:
On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Here's a last one for the cfbot.
Thanks for committing the first 3 patches btw. Attached a tiny change
to 0001, which adds "(backing struct for PGcancelConn)" to the comment
on pg_cancel_conn.
Thanks, I included it. I hope there were no other changes, because I
didn't verify :-) but if there were, please let me know to incorporate
them.
I made a number of other small changes, mostly to the documentation,
nothing fundamental. (Someday we should stop using <listentry> to
document the libpq functions and use refentry's instead ... it'd be
useful to have manpages for these functions.)
One thing I don't like very much is release_conn_addrinfo(), which is
called conditionally in two places but unconditionally in other places.
Maybe it'd make more sense to put this conditionality inside the
function itself, possibly with a "bool force" flag to suppress that in
the cases where it is not desired.
In pqConnectDBComplete, we cast the PGconn * to PGcancelConn * in order
to call PQcancelPoll, which is a bit abusive, but I don't know how to do
better. Maybe we just accept this ... but if PQcancelStart is the only
way to have pqConnectDBStart called from a cancel connection, maybe it'd
be saner to duplicate pqConnectDBStart for cancel conns.
Thanks!
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Hmm, buildfarm member kestrel (which uses
-fsanitize=undefined,alignment) failed:
# Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc
dbname='postgres'
test cancellations...
libpq_pipeline:260: query did not fail when it was expected
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-12%2016%3A41%3A27
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"The saddest aspect of life right now is that science gathers knowledge faster
than society gathers wisdom." (Isaac Asimov)
On 2024-Mar-12, Alvaro Herrera wrote:
Hmm, buildfarm member kestrel (which uses
-fsanitize=undefined,alignment) failed:
# Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc
dbname='postgres'
test cancellations...
libpq_pipeline:260: query did not fail when it was expected
Hm, I tried using the same compile flags, couldn't reproduce.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"Pido que me den el Nobel por razones humanitarias" (Nicanor Parra)
On Tue, 12 Mar 2024 at 19:28, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
On 2024-Mar-12, Alvaro Herrera wrote:
Hmm, buildfarm member kestrel (which uses
-fsanitize=undefined,alignment) failed:
# Running: libpq_pipeline -r 700 cancel port=49975 host=/tmp/dFh46H7YGc
dbname='postgres'
test cancellations...
libpq_pipeline:260: query did not fail when it was expected
Hm, I tried using the same compile flags, couldn't reproduce.
Okay, it seems it passed now, so I guess this test is flaky somehow.
The error message and the timing difference between the failed and
succeeded buildfarm runs clearly indicate that the pg_sleep ran its
180 seconds to completion (so cancel was never processed for some
reason).
**failed case**
282/285 postgresql:libpq_pipeline / libpq_pipeline/001_libpq_pipeline
ERROR 191.56s exit status 1
**succeeded case**
252/285 postgresql:libpq_pipeline / libpq_pipeline/001_libpq_pipeline
OK 10.01s 21 subtests passed
I don't see any obvious reason for how this test can be flaky, but
I'll think a bit more about it tomorrow.
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Tue, 12 Mar 2024 at 19:28, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hmm, buildfarm member kestrel (which uses
-fsanitize=undefined,alignment) failed:
Hm, I tried using the same compile flags, couldn't reproduce.
Okay, it passed now it seems so I guess this test is flaky somehow.
Two more intermittent failures:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bushmaster&dt=2024-03-13%2003%3A15%3A09
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-03-13%2003%3A15%3A31
These animals all belong to Andres' flotilla, but other than that
I'm not seeing a connection. I suspect it's basically just a
timing dependency. Have you thought about the fact that a cancel
request is a no-op if it arrives after the query's done?
regards, tom lane
On Wed, 13 Mar 2024 at 04:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I suspect it's basically just a
timing dependency. Have you thought about the fact that a cancel
request is a no-op if it arrives after the query's done?
I agree it's probably a timing issue. The cancel being received after
the query is done seems very unlikely, since the query takes 180
seconds (assuming PG_TEST_TIMEOUT_DEFAULT is not lowered for these
animals). I think it's more likely that the cancel request arrives too
early, and is thus ignored because no query is running yet. The
test already had logic to wait until the query backend was in the
"active" state before sending a cancel, to solve that issue. But my
guess is that that somehow isn't enough.
Sadly I'm having a hard time reliably reproducing this race condition
locally. So it's hard to be sure what is happening here. Attached is a
patch with a wild guess as to what the issue might be (i.e. seeing an
outdated "active" state and thus passing the check even though the
query is not running yet).
Attachments:
v37-0001-Hopefully-make-cancel-test-more-reliable.patch
From ca13d7733b8a2f0f7113b579a23be617e6accaf9 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Wed, 13 Mar 2024 10:49:05 +0100
Subject: [PATCH v37 1/2] Hopefully make cancel test more reliable
The newly introduced cancel test in libpq_pipeline was flaky. It's not
completely clear why, but one option is that the check for "active" was
actually seeing the active state for the previous query. This change
should address any such race condition by first waiting until the
connection is reported as idle.
---
.../modules/libpq_pipeline/libpq_pipeline.c | 62 ++++++++++++-------
1 file changed, 39 insertions(+), 23 deletions(-)
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 1fe15ee8899..1d1549e7a1d 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -114,48 +114,34 @@ confirm_query_canceled_impl(int line, PGconn *conn)
PQconsumeInput(conn);
}
-#define send_cancellable_query(conn, monitorConn) \
- send_cancellable_query_impl(__LINE__, conn, monitorConn)
static void
-send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+wait_for_connection_state(int line, PGconn *conn, char *state, PGconn *monitorConn)
{
- const char *env_wait;
- const Oid paramTypes[1] = {INT4OID};
+ const Oid paramTypes[] = {INT4OID, TEXTOID};
int procpid = PQbackendPID(conn);
- env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
- if (env_wait == NULL)
- env_wait = "180";
-
- if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes,
- &env_wait, NULL, NULL, 0) != 1)
- pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
-
- /*
- * Wait until the query is actually running. Otherwise sending a
- * cancellation request might not cancel the query due to race conditions.
- */
while (true)
{
char *value;
PGresult *res;
- const char *paramValues[1];
+ const char *paramValues[2];
char pidval[16];
snprintf(pidval, 16, "%d", procpid);
paramValues[0] = pidval;
+ paramValues[1] = state;
res = PQexecParams(monitorConn,
"SELECT count(*) FROM pg_stat_activity WHERE "
- "pid = $1 AND state = 'active'",
- 1, NULL, paramValues, NULL, NULL, 1);
+ "pid = $1 AND state = $2",
+ 2, paramTypes, paramValues, NULL, NULL, 1);
if (PQresultStatus(res) != PGRES_TUPLES_OK)
- pg_fatal("could not query pg_stat_activity: %s", PQerrorMessage(monitorConn));
+ pg_fatal_impl(line, "could not query pg_stat_activity: %s", PQerrorMessage(monitorConn));
if (PQntuples(res) != 1)
- pg_fatal("unexpected number of rows received: %d", PQntuples(res));
+ pg_fatal_impl(line, "unexpected number of rows received: %d", PQntuples(res));
if (PQnfields(res) != 1)
- pg_fatal("unexpected number of columns received: %d", PQnfields(res));
+ pg_fatal_impl(line, "unexpected number of columns received: %d", PQnfields(res));
value = PQgetvalue(res, 0, 0);
if (*value != '0')
{
@@ -169,6 +155,36 @@ send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
}
}
+#define send_cancellable_query(conn, monitorConn) \
+ send_cancellable_query_impl(__LINE__, conn, monitorConn)
+static void
+send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
+{
+ const char *env_wait;
+ const Oid paramTypes[1] = {INT4OID};
+
+ /*
+ * Wait for the connection to be idle, so that our check for an active
+ * connection below is reliable, instead of possibly seeing an outdated
+ * state.
+ */
+ wait_for_connection_state(line, conn, "idle", monitorConn);
+
+ env_wait = getenv("PG_TEST_TIMEOUT_DEFAULT");
+ if (env_wait == NULL)
+ env_wait = "180";
+
+ if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes,
+ &env_wait, NULL, NULL, 0) != 1)
+ pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+
+ /*
+ * Wait for the query to start, because if the query is not running yet
+ * the cancel request that we send won't have any effect.
+ */
+ wait_for_connection_state(line, conn, "active", monitorConn);
+}
+
/*
* Create a new connection with the same conninfo as the given one.
*/
base-commit: a3da95deee38ee067b0bead639c830eacbe894d5
--
2.34.1
v37-0002-Start-using-new-libpq-cancel-APIs.patch
From fa0b581e928a3394528f2faee40ddc7714016ba8 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v37 2/2] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..98dcca3e6fd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..dcc13dc3b24 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56f..e03160bd975 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6da..2626e68cc69 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..5ed9f3ba17b 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index ed110f740f1..0b342b5c2bb 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
On 2024-Mar-13, Jelte Fennema-Nio wrote:
I agree it's probably a timing issue. The cancel being received after
the query is done seems very unlikely, since the query takes 180
seconds (assuming PG_TEST_TIMEOUT_DEFAULT is not lowered for these
animals). I think it's more likely that the cancel request arrives too
early, and thus being ignored because no query is running yet. The
test already had logic to wait until the query backend was in the
"active" state, before sending a cancel to solve that issue. But my
guess is that that somehow isn't enough.

Sadly I'm having a hard time reliably reproducing this race condition
locally. So it's hard to be sure what is happening here. Attached is a
patch with a wild guess as to what the issue might be (i.e. seeing an
outdated "active" state and thus passing the check even though the
query is not running yet)
I tried leaving the original running in my laptop to see if I could
reproduce it, but got no hits ... and we didn't get any other failures
apart from the three ones already reported ... so it's not terribly high
probability. Anyway I pushed your patch now since the theory seems
plausible; let's see if we still get the issue to reproduce. If it
does, we could make the script more verbose to hunt for further clues.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Here's a general engineering tip: if the non-fun part is too complex for you
to figure out, that might indicate the fun part is too ambitious." (John Naylor)
/messages/by-id/CAFBsxsG4OWHBbSDM=sSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g@mail.gmail.com
On Wed, Mar 13, 2024 at 12:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
On 2024-Mar-13, Jelte Fennema-Nio wrote:
Sadly I'm having a hard time reliably reproducing this race condition
locally. So it's hard to be sure what is happening here. Attached is a
patch with a wild guess as to what the issue might be (i.e. seeing an
outdated "active" state and thus passing the check even though the
query is not running yet)

I tried leaving the original running in my laptop to see if I could
reproduce it, but got no hits ... and we didn't get any other failures
apart from the three ones already reported ... so it's not terribly high
probability. Anyway I pushed your patch now since the theory seems
plausible; let's see if we still get the issue to reproduce. If it
does, we could make the script more verbose to hunt for further clues.
I hit this on my machine. With the attached diff I can reproduce
constantly (including with the most recent test patch); I think the
cancel must be arriving between the bind/execute steps?
Thanks,
--Jacob
Attachments:
repro.diff.txt
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 6b7903314a..22ce7c07d9 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -2073,6 +2073,9 @@ exec_bind_message(StringInfo input_message)
valgrind_report_error_query(debug_query_string);
debug_query_string = NULL;
+
+ if (strstr(psrc->query_string, "pg_sleep"))
+ sleep(1);
}
/*
On Wed, 13 Mar 2024 at 20:08, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
I hit this on my machine. With the attached diff I can reproduce
constantly (including with the most recent test patch); I think the
cancel must be arriving between the bind/execute steps?
Nice find! Your explanation makes total sense. Attached a patchset
that fixes/works around this issue by using the simple query protocol
in the cancel test.
Attachments:
v38-0001-Use-simple-query-protocol-for-cancel-test.patch
From a89571fae6e91077050fc2ee6f0997376f1708fb Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Mar 2024 10:45:51 +0100
Subject: [PATCH v38 1/3] Use simple query protocol for cancel test
The new cancel test was randomly failing on the build farm. It turns out
that using the extended query protocol was the cause of this, because
it was possible for the cancel to arrive in between the Bind and Execute
messages. This fixes that by using the simple query protocol to send the
query that should be cancelled.
Reported-By: Jacob Champion
---
.../modules/libpq_pipeline/libpq_pipeline.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index e730ad37698..83f9caca726 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -165,7 +165,7 @@ static void
send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
{
const char *env_wait;
- const Oid paramTypes[1] = {INT4OID};
+ char *query;
/*
* Wait for the connection to be idle, so that our check for an active
@@ -178,10 +178,21 @@ send_cancellable_query_impl(int line, PGconn *conn, PGconn *monitorConn)
if (env_wait == NULL)
env_wait = "180";
- if (PQsendQueryParams(conn, "SELECT pg_sleep($1)", 1, paramTypes,
- &env_wait, NULL, NULL, 0) != 1)
+ /*
+ * We cannot use PQsendQueryParams here because it uses the extended
+ * protocol to send the query. And it turns out there exists a race
+ * condition where we would send the cancel request in between the Bind and
+ * Execute messages, resulting in the cancel request being ignored. So
+ * instead we build the query string client side and send it using
+ * PQsendQuery so there is only a single Query message.
+ */
+ query = psprintf("SELECT pg_sleep(%d)", atoi(env_wait));
+
+ if (PQsendQuery(conn, query) != 1)
pg_fatal_impl(line, "failed to send query: %s", PQerrorMessage(conn));
+ pfree(query);
+
/*
* Wait for the query to start, because if the query is not running yet
* the cancel request that we send won't have any effect.
base-commit: cc6e64afda530576d83e331365d36c758495a7cd
--
2.34.1
v38-0003-Start-using-new-libpq-cancel-APIs.patch
From e7773d8228307ffbb1a14c1ce017db2bc285fe4b Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Dec 2023 13:39:09 +0100
Subject: [PATCH v38 3/3] Start using new libpq cancel APIs
A previous commit introduced new APIs to libpq for cancelling queries.
This replaces the usage of the old APIs in most of the codebase with
these newer ones. This specifically leaves out changes to psql and
pgbench as those would need a much larger refactor to be able to call
them, due to the new functions not being signal-safe.
---
contrib/dblink/dblink.c | 30 +++--
contrib/postgres_fdw/connection.c | 105 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/fe_utils/connect_utils.c | 11 +-
src/test/isolation/isolationtester.c | 29 ++---
6 files changed, 145 insertions(+), 52 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 19a362526d2..98dcca3e6fd 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1346,22 +1346,32 @@ PG_FUNCTION_INFO_V1(dblink_cancel_query);
Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
- int res;
PGconn *conn;
- PGcancel *cancel;
- char errbuf[256];
+ PGcancelConn *cancelConn;
+ char *msg;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancel = PQgetCancel(conn);
+ cancelConn = PQcancelCreate(conn);
- res = PQcancel(cancel, errbuf, 256);
- PQfreeCancel(cancel);
+ PG_TRY();
+ {
+ if (!PQcancelBlocking(cancelConn))
+ {
+ msg = pchomp(PQcancelErrorMessage(cancelConn));
+ }
+ else
+ {
+ msg = "OK";
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancelConn);
+ }
+ PG_END_TRY();
- if (res == 1)
- PG_RETURN_TEXT_P(cstring_to_text("OK"));
- else
- PG_RETURN_TEXT_P(cstring_to_text(errbuf));
+ PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..dcc13dc3b24 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,104 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ {
+ break;
+ }
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ if (failed)
+ {
+ if (timed_out)
+ {
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ }
+ else
+ {
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1753,10 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56f..e03160bd975 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6da..2626e68cc69 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/fe_utils/connect_utils.c b/src/fe_utils/connect_utils.c
index 808d54461fd..5ed9f3ba17b 100644
--- a/src/fe_utils/connect_utils.c
+++ b/src/fe_utils/connect_utils.c
@@ -157,19 +157,14 @@ connectMaintenanceDatabase(ConnParams *cparams,
void
disconnectDatabase(PGconn *conn)
{
- char errbuf[256];
-
Assert(conn != NULL);
if (PQtransactionStatus(conn) == PQTRANS_ACTIVE)
{
- PGcancel *cancel;
+ PGcancelConn *cancelConn = PQcancelCreate(conn);
- if ((cancel = PQgetCancel(conn)))
- {
- (void) PQcancel(cancel, errbuf, sizeof(errbuf));
- PQfreeCancel(cancel);
- }
+ (void) PQcancelBlocking(cancelConn);
+ PQcancelFinish(cancelConn);
}
PQfinish(conn);
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index ed110f740f1..0b342b5c2bb 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -946,26 +946,21 @@ try_complete_step(TestSpec *testspec, PermutationStep *pstep, int flags)
*/
if (td > max_step_wait && !canceled)
{
- PGcancel *cancel = PQgetCancel(conn);
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- if (cancel != NULL)
+ if (PQcancelBlocking(cancel_conn))
{
- char buf[256];
-
- if (PQcancel(cancel, buf, sizeof(buf)))
- {
- /*
- * print to stdout not stderr, as this should appear
- * in the test case's results
- */
- printf("isolationtester: canceling step %s after %d seconds\n",
- step->name, (int) (td / USECS_PER_SEC));
- canceled = true;
- }
- else
- fprintf(stderr, "PQcancel failed: %s\n", buf);
- PQfreeCancel(cancel);
+ /*
+ * print to stdout not stderr, as this should appear in
+ * the test case's results
+ */
+ printf("isolationtester: canceling step %s after %d seconds\n",
+ step->name, (int) (td / USECS_PER_SEC));
+ canceled = true;
}
+ else
+ fprintf(stderr, "PQcancel failed: %s\n", PQcancelErrorMessage(cancel_conn));
+ PQcancelFinish(cancel_conn);
}
/*
--
2.34.1
v38-0002-Revert-Comment-out-noisy-libpq_pipeline-test.patch
From 30c1302af63d852cc70163ee1b04343596121a9f Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Thu, 14 Mar 2024 10:38:51 +0100
Subject: [PATCH v38 2/3] Revert "Comment out noisy libpq_pipeline test"
This reverts commit cc6e64afda530576d83e331365d36c758495a7cd.
---
src/test/modules/libpq_pipeline/libpq_pipeline.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/src/test/modules/libpq_pipeline/libpq_pipeline.c b/src/test/modules/libpq_pipeline/libpq_pipeline.c
index 83f9caca726..f7936ea2070 100644
--- a/src/test/modules/libpq_pipeline/libpq_pipeline.c
+++ b/src/test/modules/libpq_pipeline/libpq_pipeline.c
@@ -2109,10 +2109,7 @@ usage(const char *progname)
static void
print_test_list(void)
{
-#if 0
- /* Commented out until further stabilized */
printf("cancel\n");
-#endif
printf("disallowed_in_pipeline\n");
printf("multi_pipelines\n");
printf("nosync\n");
--
2.34.1
On 2024-Mar-14, Jelte Fennema-Nio wrote:
On Wed, 13 Mar 2024 at 20:08, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I hit this on my machine. With the attached diff I can reproduce
constantly (including with the most recent test patch); I think the
cancel must be arriving between the bind/execute steps?

Nice find! Your explanation makes total sense. Attached a patchset
that fixes/works around this issue by using the simple query protocol
in the cancel test.
Hmm, isn't this basically saying that we're giving up on reliably
canceling queries altogether? I mean, maybe we'd like to instead fix
the bug about canceling queries in extended query protocol ...
Isn't that something you're worried about?
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"World domination is proceeding according to plan" (Andrew Morton)
On Thu, 14 Mar 2024 at 11:33, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hmm, isn't this basically saying that we're giving up on reliably
canceling queries altogether? I mean, maybe we'd like to instead fix
the bug about canceling queries in extended query protocol ...
Isn't that something you're worried about?
In any case I think it's worth having (non-flaky) test coverage of our
libpq cancellation sending code. So I think it makes sense to commit
the patch I proposed, even if the backend code that handles those cancel
requests is arguably buggy.
Regarding the question of whether the backend code is actually buggy:
the way cancel requests are defined to work is a bit awkward. They
cancel whatever operation is running on the session when they arrive.
So if the session is just in the middle of a Bind and Execute message
there is nothing to cancel. While surprising and probably not what
someone would want, I don't think this behaviour is too horrible in
practice in this case. Most of the time people cancel queries while
the Execute message is being processed. The new test really only runs
into this problem because it sends a cancel request immediately after
sending the query.
I definitely think it's worth rethinking the way we do query
cancellations though. I think what we would probably want is a way to
cancel a specific query/message on a session. Instead of cancelling
whatever is running at the moment when the cancel request is processed
by Postgres. Because this "cancel whatever is running" behaviour is
fraught with issues, this Bind/Execute issue being only one of them.
One really annoying race condition, where a cancel request cancels a
different query than intended, can happen with this flow (one that I
spent a lot of time addressing in PgBouncer):
1. You send query A on session 1
2. You send a cancel request for session 1 (intending to cancel query A)
3. Query A completes by itself
4. You now send query B
5. The cancel request is now processed
6. Query B is now cancelled
But solving that race condition would involve changing the postgres
protocol. Which I'm trying to make possible with the first few commits
in [1]. And while those first few commits might still land in PG17, I
don't think a large protocol change like adding query identifiers to
cancel requests is feasible for PG17 anymore.
I enabled the test again and also pushed the changes to dblink,
isolationtester and fe_utils (which AFAICS is used by pg_dump,
pg_amcheck, reindexdb and vacuumdb). I chickened out of committing the
postgres_fdw changes though, so here they are again. Not sure I'll find
courage to get these done by tomorrow, or whether I should just leave
them to Fujita-san or Noah, who have been the last committers to touch this.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"No renuncies a nada. No te aferres a nada."
Attachments:
v39-0001-postgres_fdw-Start-using-new-libpq-cancel-APIs.patch (text/x-diff; charset=utf-8)
From 737578fcdc6ed0de64838d2ad905054b18eb9ec1 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 18 Mar 2024 19:37:40 +0100
Subject: [PATCH v39] postgres_fdw: Start using new libpq cancel APIs
Commit 61461a300c1c introduced new functions to libpq for cancelling
queries. This replaces the usage of the old ones in postgres_fdw.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQT_VgOWWENUqvUV9xQmbaCyXjtRRAYO8W07oqashk_N+g@mail.gmail.com
---
contrib/postgres_fdw/connection.c | 108 +++++++++++++++---
.../postgres_fdw/expected/postgres_fdw.out | 15 +++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
3 files changed, 113 insertions(+), 17 deletions(-)
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf591..0c66eaa001 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,106 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
+/*
+ * Submit a cancel request to the given connection, waiting only until
+ * the given time.
+ *
+ * We sleep interruptibly until we receive confirmation that the cancel
+ * request has been accepted, and if it is, return true; if the timeout
+ * lapses without that, or the request fails for whatever reason, return
+ * false.
+ */
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ bool timed_out = false;
+ bool failed = false;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+ if (!PQcancelStart(cancel_conn))
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
+ PG_TRY();
{
ereport(WARNING,
(errcode(ERRCODE_CONNECTION_FAILURE),
errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
}
- PQfreeCancel(cancel);
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return false;
}
- return true;
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ break;
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ failed = true;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit:
+ if (failed)
+ {
+ if (timed_out)
+ ereport(WARNING,
+ (errmsg("could not cancel request due to timeout")));
+ else
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s",
+ pchomp(PQcancelErrorMessage(cancel_conn)))));
+ }
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return !failed;
}
static bool
@@ -1685,7 +1755,11 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime;
+
+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 58a603ac56..e03160bd97 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6d..2626e68cc6 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
--
2.39.2
On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:
I enabled the test again and also pushed the changes to dblink,
isolationtester and fe_utils (which AFAICS is used by pg_dump,
I recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to
use from dblink and postgres_fdw. pgxn modules calling PQcancel() from the
backend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt
the new way.
On Thu, 21 Mar 2024 at 03:54, Noah Misch <noah@leadboat.com> wrote:
On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:
I enabled the test again and also pushed the changes to dblink,
isolationtester and fe_utils (which AFAICS is used by pg_dump,
I recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to
use from dblink and postgres_fdw. pgxn modules calling PQcancel() from the
backend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt
the new way.
Done
Attachments:
v37-0001-postgres_fdw-Start-using-new-libpq-cancel-APIs.patch (application/octet-stream)
From 71b584be461281be2d412cbb9ca41a022bd92ff4 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 18 Mar 2024 19:37:40 +0100
Subject: [PATCH v37] postgres_fdw: Start using new libpq cancel APIs
Commit 61461a300c1c introduced new functions to libpq for cancelling
queries. This replaces the usage of the old ones in postgres_fdw.
This is done by introducing a new libpqsrv_cancel helper function. This
function takes a timeout and can itself be interrupted while it is
sending a cancel request, instead of blocking. This new function is
now also used by dblink.
Finally, it also adds some test coverage for the cancel support in
postgres_fdw.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQT_VgOWWENUqvUV9xQmbaCyXjtRRAYO8W07oqashk_N+g@mail.gmail.com
---
contrib/dblink/dblink.c | 21 ++---
contrib/postgres_fdw/connection.c | 45 +++++-----
.../postgres_fdw/expected/postgres_fdw.out | 15 ++++
contrib/postgres_fdw/sql/postgres_fdw.sql | 7 ++
src/include/libpq/libpq-be-fe-helpers.h | 84 +++++++++++++++++++
5 files changed, 135 insertions(+), 37 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index edbc9ab02ac..2c1d92326ed 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1347,25 +1347,16 @@ Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
PGconn *conn;
- PGcancelConn *cancelConn;
char *msg;
+ TimestampTz endtime;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancelConn = PQcancelCreate(conn);
-
- PG_TRY();
- {
- if (!PQcancelBlocking(cancelConn))
- msg = pchomp(PQcancelErrorMessage(cancelConn));
- else
- msg = "OK";
- }
- PG_FINALLY();
- {
- PQcancelFinish(cancelConn);
- }
- PG_END_TRY();
+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ 30000);
+ msg = libpqsrv_cancel(conn, endtime);
+ if (!msg)
+ msg = "OK";
PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf5915..77a57bbdfd2 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,35 +1315,32 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
+/*
+ * Submit a cancel request to the given connection, waiting only until
+ * the given time.
+ *
+ * We sleep interruptibly until we receive confirmation that the cancel
+ * request has been accepted, and if it is, return true; if the timeout
+ * lapses without that, or the request fails for whatever reason, return
+ * false.
+ */
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ char *error = libpqsrv_cancel(conn, endtime);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
+ if (error)
{
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
- {
- ereport(WARNING,
- (errcode(ERRCODE_CONNECTION_FAILURE),
- errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
- }
- PQfreeCancel(cancel);
+ ereport(WARNING,
+ (errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s", error)));
+ return false;
}
-
return true;
}
@@ -1685,7 +1682,11 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime;
+
+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index acbbf3b56c8..98d16b1fd8c 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index e3d147de6da..2626e68cc69 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/include/libpq/libpq-be-fe-helpers.h b/src/include/libpq/libpq-be-fe-helpers.h
index 5d33bcf32f7..33c7be7845c 100644
--- a/src/include/libpq/libpq-be-fe-helpers.h
+++ b/src/include/libpq/libpq-be-fe-helpers.h
@@ -44,6 +44,8 @@
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/latch.h"
+#include "utils/timestamp.h"
+#include "utils/wait_event.h"
static inline void libpqsrv_connect_prepare(void);
@@ -365,4 +367,86 @@ libpqsrv_get_result(PGconn *conn, uint32 wait_event_info)
return PQgetResult(conn);
}
+/*
+ * Submit a cancel request to the given connection, waiting only until
+ * the given time.
+ *
+ * We sleep interruptibly until we receive confirmation that the cancel
+ * request has been accepted, and if it is, return NULL; if the timeout
+ * lapses without that, or the request fails for whatever reason, return
+ * the error message.
+ */
+static char *
+libpqsrv_cancel(PGconn *conn, TimestampTz endtime)
+{
+ char *error = NULL;
+ PGcancelConn *cancel_conn = PQcancelCreate(conn);
+
+ if (!PQcancelStart(cancel_conn))
+ {
+ PG_TRY();
+ {
+ error = pchomp(PQcancelErrorMessage(cancel_conn));
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+ return error;
+ }
+
+ /* In what follows, do not leak any PGcancelConn on an error. */
+ PG_TRY();
+ {
+ while (true)
+ {
+ PostgresPollingStatusType pollres = PQcancelPoll(cancel_conn);
+ TimestampTz now = GetCurrentTimestamp();
+ long cur_timeout;
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ if (pollres == PGRES_POLLING_OK)
+ break;
+
+ /* If timeout has expired, give up, else get sleep time. */
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ error = "timed out";
+ break;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ error = pchomp(PQcancelErrorMessage(cancel_conn));
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_EXTENSION);
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return error;
+}
+
+
#endif /* LIBPQ_BE_FE_HELPERS_H */
base-commit: b4080fa3dcf6c6359e542169e0e81a0662c53ba8
--
2.34.1
On 2024-Mar-22, Jelte Fennema-Nio wrote:
On Thu, 21 Mar 2024 at 03:54, Noah Misch <noah@leadboat.com> wrote:
On Mon, Mar 18, 2024 at 07:40:10PM +0100, Alvaro Herrera wrote:
I enabled the test again and also pushed the changes to dblink,
isolationtester and fe_utils (which AFAICS is used by pg_dump,
I recommend adding a libpqsrv_cancel() function to libpq-be-fe-helpers.h, to
use from dblink and postgres_fdw. pgxn modules calling PQcancel() from the
backend (citus pg_bulkload plproxy pmpp) then have a better chance to adopt
the new way.
Done
Nice, thanks. I played with it a bit, mostly trying to figure out if
the chosen API is usable. I toyed with making it return boolean success
and the error message as an output argument, because I was nervous about
what'd happen in OOM. But since this is backend environment, what
actually happens is that we elog(ERROR) anyway, so we never return a
NULL error message. So after the detour I think Jelte's API is okay.
I changed it so that the error messages are returned as translated
phrases, and was bothered by the fact that if errors happen repeatedly,
the memory for them might be leaked. Maybe this is fine depending on
the caller's memory context, but since it's only at most one string each
time, it's quite easy to just keep track of it so that we can release it
on the next.
I ended up reducing the two PG_TRY blocks to a single one. I see no
reason to split them up, and this way it looks more legible.
What do you think?
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"Tiene valor aquel que admite que es un cobarde" (Fernandel)
Attachments:
libpqsrv_cancel.patch (text/x-diff; charset=utf-8)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index edbc9ab02a..de858e165a 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1347,25 +1347,16 @@ Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
PGconn *conn;
- PGcancelConn *cancelConn;
char *msg;
+ TimestampTz endtime;
dblink_init();
conn = dblink_get_named_conn(text_to_cstring(PG_GETARG_TEXT_PP(0)));
- cancelConn = PQcancelCreate(conn);
-
- PG_TRY();
- {
- if (!PQcancelBlocking(cancelConn))
- msg = pchomp(PQcancelErrorMessage(cancelConn));
- else
- msg = "OK";
- }
- PG_FINALLY();
- {
- PQcancelFinish(cancelConn);
- }
- PG_END_TRY();
+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ 30000);
+ msg = libpqsrv_cancel(conn, endtime);
+ if (msg == NULL)
+ msg = "OK";
PG_RETURN_TEXT_P(cstring_to_text(msg));
}
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 4931ebf591..2532e453c4 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -133,7 +133,7 @@ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
-static bool pgfdw_cancel_query_begin(PGconn *conn);
+static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
@@ -1315,36 +1315,31 @@ pgfdw_cancel_query(PGconn *conn)
endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
CONNECTION_CLEANUP_TIMEOUT);
- if (!pgfdw_cancel_query_begin(conn))
+ if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
return pgfdw_cancel_query_end(conn, endtime, false);
}
+/*
+ * Submit a cancel request to the given connection, waiting only until
+ * the given time.
+ *
+ * We sleep interruptibly until we receive confirmation that the cancel
+ * request has been accepted, and if it is, return true; if the timeout
+ * lapses without that, or the request fails for whatever reason, return
+ * false.
+ */
static bool
-pgfdw_cancel_query_begin(PGconn *conn)
+pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- PGcancel *cancel;
- char errbuf[256];
+ char *errormsg = libpqsrv_cancel(conn, endtime);
- /*
- * Issue cancel request. Unfortunately, there's no good way to limit the
- * amount of time that we might block inside PQgetCancel().
- */
- if ((cancel = PQgetCancel(conn)))
- {
- if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
- {
- ereport(WARNING,
- (errcode(ERRCODE_CONNECTION_FAILURE),
- errmsg("could not send cancel request: %s",
- errbuf)));
- PQfreeCancel(cancel);
- return false;
- }
- PQfreeCancel(cancel);
- }
+ if (errormsg != NULL)
+ ereport(WARNING,
+ errcode(ERRCODE_CONNECTION_FAILURE),
+ errmsg("could not send cancel request: %s", errormsg));
- return true;
+ return errormsg == NULL;
}
static bool
@@ -1685,7 +1680,11 @@ pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
*/
if (PQtransactionStatus(entry->conn) == PQTRANS_ACTIVE)
{
- if (!pgfdw_cancel_query_begin(entry->conn))
+ TimestampTz endtime;
+
+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ CONNECTION_CLEANUP_TIMEOUT);
+ if (!pgfdw_cancel_query_begin(entry->conn, endtime))
return false; /* Unable to cancel running query */
*cancel_requested = lappend(*cancel_requested, entry);
}
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 3f0110c52b..b7af86d351 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2739,6 +2739,21 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+RESET statement_timeout;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 5fffc4c53b..6e1c819159 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -737,6 +737,13 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+SET statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+RESET statement_timeout;
+
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/src/include/libpq/libpq-be-fe-helpers.h b/src/include/libpq/libpq-be-fe-helpers.h
index 5d33bcf32f..123ffb96af 100644
--- a/src/include/libpq/libpq-be-fe-helpers.h
+++ b/src/include/libpq/libpq-be-fe-helpers.h
@@ -44,6 +44,8 @@
#include "miscadmin.h"
#include "storage/fd.h"
#include "storage/latch.h"
+#include "utils/timestamp.h"
+#include "utils/wait_event.h"
static inline void libpqsrv_connect_prepare(void);
@@ -365,4 +367,105 @@ libpqsrv_get_result(PGconn *conn, uint32 wait_event_info)
return PQgetResult(conn);
}
+/*
+ * Submit a cancel request to the given connection, waiting only until
+ * the given time.
+ *
+ * We sleep interruptibly until we receive confirmation that the cancel
+ * request has been accepted, and if it is, return NULL; if the cancel
+ * request fails, return an error message string (which is not to be
+ * freed).
+ *
+ * For other problems (to wit: OOM when strdup'ing an error message from
+ * libpq), this function can ereport(ERROR).
+ */
+static inline char *
+libpqsrv_cancel(PGconn *conn, TimestampTz endtime)
+{
+ PGcancelConn *cancel_conn;
+ static char *prverror = NULL;
+ char *error = NULL;
+
+ /*
+ * Most of the error strings we return are statically allocated so they
+ * don't need freeing, but there's a couple of cases where we cannot keep
+ * that promise. To avoid long-term leaks, we keep a static pointer to
+ * the last one we returned, and free it here next time around.
+ */
+ if (prverror != NULL)
+ {
+ pfree(prverror);
+ prverror = NULL;
+ }
+
+ cancel_conn = PQcancelCreate(conn);
+ if (cancel_conn == NULL)
+ return _("out of memory");
+
+ /* In what follows, do not leak any PGcancelConn on any errors. */
+
+ PG_TRY();
+ {
+ if (!PQcancelStart(cancel_conn))
+ {
+ error = pchomp(PQcancelErrorMessage(cancel_conn));
+ /* save pchomp output so we can free it next time */
+ prverror = error;
+ goto exit;
+ }
+
+ for (;;)
+ {
+ PostgresPollingStatusType pollres;
+ TimestampTz now;
+ long cur_timeout;
+ int waitEvents = WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH;
+
+ pollres = PQcancelPoll(cancel_conn);
+ if (pollres == PGRES_POLLING_OK)
+ break; /* success! */
+
+ /* If timeout has expired, give up, else get sleep time. */
+ now = GetCurrentTimestamp();
+ cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ if (cur_timeout <= 0)
+ {
+ error = _("cancel request timed out");
+ break;
+ }
+
+ switch (pollres)
+ {
+ case PGRES_POLLING_READING:
+ waitEvents |= WL_SOCKET_READABLE;
+ break;
+ case PGRES_POLLING_WRITING:
+ waitEvents |= WL_SOCKET_WRITEABLE;
+ break;
+ default:
+ /* save pchomp output so we can free it next time */
+ error = pchomp(PQcancelErrorMessage(cancel_conn));
+ prverror = error;
+ goto exit;
+ }
+
+ /* Sleep until there's something to do */
+ WaitLatchOrSocket(MyLatch, waitEvents, PQcancelSocket(cancel_conn),
+ cur_timeout, PG_WAIT_CLIENT);
+
+ ResetLatch(MyLatch);
+
+ CHECK_FOR_INTERRUPTS();
+ }
+exit: ;
+ }
+ PG_FINALLY();
+ {
+ PQcancelFinish(cancel_conn);
+ }
+ PG_END_TRY();
+
+ return error;
+}
+
#endif /* LIBPQ_BE_FE_HELPERS_H */
On 2024-Mar-27, Alvaro Herrera wrote:
I changed it so that the error messages are returned as translated
phrases, and was bothered by the fact that if errors happen repeatedly,
the memory for them might be leaked. Maybe this is fine depending on
the caller's memory context, but since it's only at most one string each
time, it's quite easy to just keep track of it so that we can release it
on the next.
(Actually this sounds clever but fails pretty obviously if the caller
does free the string, such as in a memory context reset. So I guess we
have to just accept the potential leakage.)
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"La conclusión que podemos sacar de esos estudios es que
no podemos sacar ninguna conclusión de ellos" (Tanenbaum)
On Wed, 27 Mar 2024 at 19:46, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
On 2024-Mar-27, Alvaro Herrera wrote:
I changed it so that the error messages are returned as translated
phrases, and was bothered by the fact that if errors happen repeatedly,
the memory for them might be leaked. Maybe this is fine depending on
the caller's memory context, but since it's only at most one string each
time, it's quite easy to just keep track of it so that we can release it
on the next.
(Actually this sounds clever but fails pretty obviously if the caller
does free the string, such as in a memory context reset. So I guess we
have to just accept the potential leakage.)
Your changes look good, apart from the prverror stuff indeed. If you
remove the prverror stuff again I think this is ready to commit.
On Wed, 27 Mar 2024 at 19:27, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
I ended up reducing the two PG_TRY blocks to a single one. I see no
reason to split them up, and this way it looks more legible.
I definitely agree this looks better. Not sure why I hadn't done that,
maybe it wasn't possible in one of the earlier iterations of the API.
On 2024-Mar-28, Jelte Fennema-Nio wrote:
Your changes look good, apart from the prverror stuff indeed. If you
remove the prverror stuff again I think this is ready to commit.
Great, thanks for looking. Pushed now, I'll be closing the commitfest
entry shortly.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"Always assume the user will do much worse than the stupidest thing
you can imagine." (Julien PUYDT)
Hm, indri failed:
ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O2 -fno-common -Werror -fvisibility=hidden -bundle -o dblink.dylib dblink.o -L../../src/port -L../../src/common -L../../src/interfaces/libpq -lpq -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk -L/opt/local/libexec/llvm-15/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -Wl,-dead_strip_dylibs -Werror -fvisibility=hidden -bundle_loader ../../src/backend/postgres
Undefined symbols for architecture arm64:
"_libintl_gettext", referenced from:
_libpqsrv_cancel in dblink.o
_libpqsrv_cancel in dblink.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [dblink.dylib] Error 1
make: *** [all-dblink-recurse] Error 2
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
On 2024-Mar-28, Alvaro Herrera wrote:
Undefined symbols for architecture arm64:
"_libintl_gettext", referenced from:
_libpqsrv_cancel in dblink.o
_libpqsrv_cancel in dblink.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [dblink.dylib] Error 1
make: *** [all-dblink-recurse] Error 2
I just removed the _() from the new function. There's not much point in
wasting more time on this, given that contrib doesn't have translation
support anyway, and we're not using this in libpqwalreceiver.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"Crear es tan difícil como ser libre" (Elsa Triolet)
Eh, kestrel has also failed[1], apparently every query after the large
JOIN that this commit added as test fails with a statement timeout error.
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-28%2016%3A01%3A14
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
"No deja de ser humillante para una persona de ingenio saber
que no hay tonto que no le pueda enseñar algo." (Jean B. Say)
On Thu, 28 Mar 2024 at 17:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Eh, kestrel has also failed[1], apparently every query after the large
JOIN that this commit added as test fails with a statement timeout error.

[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-03-28%2016%3A01%3A14
Ugh that's annoying, the RESET is timing out too I guess. That can
hopefully be easily fixed by changing the new test to:
BEGIN;
SET LOCAL statement_timeout = '10ms';
select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
-- this takes very long
ROLLBACK;
On 2024-Mar-28, Jelte Fennema-Nio wrote:
Ugh that's annoying, the RESET is timing out too I guess.
Hah, you're right, I can reproduce with a smaller timeout, and using SET
LOCAL works as a fix. If we're doing that, why not reduce the timeout
to 1ms? We don't need to wait extra 9ms ...
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
“Cuando no hay humildad las personas se degradan” (A. Christie)
On Thu, 28 Mar 2024 at 17:43, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hah, you're right, I can reproduce with a smaller timeout, and using SET
LOCAL works as a fix. If we're doing that, why not reduce the timeout
to 1ms? We don't need to wait extra 9ms ...
I think we don't really want to make the timeout too short. Otherwise
the query might get cancelled before we push any query down to the
FDW. I guess that means that for some slow machines even 10ms is not
enough to make the test do the intended purpose. I'd keep it at 10ms,
which seems long enough for normal systems, while still being pretty
short.
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Thu, 28 Mar 2024 at 17:43, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Hah, you're right, I can reproduce with a smaller timeout, and using SET
LOCAL works as a fix. If we're doing that, why not reduce the timeout
to 1ms? We don't need to wait extra 9ms ...
I think we don't really want to make the timeout too short. Otherwise
the query might get cancelled before we push any query down to the
FDW. I guess that means that for some slow machines even 10ms is not
enough to make the test do the intended purpose. I'd keep it at 10ms,
which seems long enough for normal systems, while still being pretty
short.
If the test fails both when the machine is too slow and when it's
too fast, then there's zero hope of making it stable and we should
just remove it.
regards, tom lane
On 2024-Mar-28, Tom Lane wrote:
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
I think we don't really want to make the timeout too short. Otherwise
the query might get cancelled before we push any query down to the
FDW. I guess that means that for some slow machines even 10ms is not
enough to make the test do the intended purpose. I'd keep it at 10ms,
which seems long enough for normal systems, while still being pretty
short.

If the test fails both when the machine is too slow and when it's
too fast, then there's zero hope of making it stable and we should
just remove it.
It doesn't fail when it's too fast -- it's just that it doesn't cover
the case we want to cover.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Escucha y olvidarás; ve y recordarás; haz y entenderás" (Confucio)
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
On 2024-Mar-28, Tom Lane wrote:
If the test fails both when the machine is too slow and when it's
too fast, then there's zero hope of making it stable and we should
just remove it.
It doesn't fail when it's too fast -- it's just that it doesn't cover
the case we want to cover.
That's hardly better, because then you think you have test
coverage but maybe you don't.
Could we make this test bulletproof by using an injection point?
If not, I remain of the opinion that we're better off without it.
regards, tom lane
On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
It doesn't fail when it's too fast -- it's just that it doesn't cover
the case we want to cover.

That's hardly better, because then you think you have test
coverage but maybe you don't.
Honestly, that seems quite a lot better. Instead of having randomly
failing builds, you have a test that creates coverage 80+% of the
time. And that also seems a lot better than having no coverage at all
(which is what we had for the last 7 years since introduction of
cancellations to postgres_fdw). It would be good to expand the comment
in the test though saying that the test might not always cover the
intended code path, due to timing problems.
Could we make this test bulletproof by using an injection point?
If not, I remain of the opinion that we're better off without it.
Possibly, and if so, I agree that would be better than the currently
added test. But I honestly don't feel like spending the time on
creating such a test. And given 7 years have passed without someone
adding any test for this codepath at all, I don't expect anyone else
will either.
If you both feel we're better off without the test, feel free to
remove it. This was just some small missing test coverage that I
noticed while working on this patch, that I thought I'd quickly
address. I don't particularly care a lot about the specific test.
On Fri, Mar 29, 2024 at 09:17:55AM +0100, Jelte Fennema-Nio wrote:
On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Could we make this test bulletproof by using an injection point?
If not, I remain of the opinion that we're better off without it.

Possibly, and if so, I agree that would be better than the currently
added test. But I honestly don't feel like spending the time on
creating such a test.
The SQL test is more representative of real applications, and it's way simpler
to understand. In general, I prefer 6-line SQL tests that catch a problem 10%
of the time over injection point tests that catch it 100% of the time. For
low detection rate to be exciting, it needs to be low enough to have a serious
chance of all buildfarm members reporting green for the bad commit. With ~115
buildfarm members running in the last day, 0.1% detection rate would have been
low enough to bother improving, but 4% would be high enough to call it good.
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
Great, thanks for looking. Pushed now, I'll be closing the commitfest
entry shortly.
On my machine, headerscheck does not like this:
$ src/tools/pginclude/headerscheck --cplusplus
In file included from /tmp/headerscheck.4gTaW5/test.cpp:3:
./src/include/libpq/libpq-be-fe-helpers.h: In function 'char* libpqsrv_cancel(PGconn*, TimestampTz)':
./src/include/libpq/libpq-be-fe-helpers.h:393:10: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
return "out of memory";
^~~~~~~~~~~~~~~
./src/include/libpq/libpq-be-fe-helpers.h:421:13: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
error = "cancel request timed out";
^~~~~~~~~~~~~~~~~~~~~~~~~~
The second part of that could easily be fixed by declaring "error" as
"const char *". As for the first part, can we redefine the whole
function as returning "const char *"? (If not, this coding is very
questionable anyway.)
regards, tom lane
On 2024-Apr-03, Tom Lane wrote:
On my machine, headerscheck does not like this:
$ src/tools/pginclude/headerscheck --cplusplus
In file included from /tmp/headerscheck.4gTaW5/test.cpp:3:
./src/include/libpq/libpq-be-fe-helpers.h: In function 'char* libpqsrv_cancel(PGconn*, TimestampTz)':
./src/include/libpq/libpq-be-fe-helpers.h:393:10: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
return "out of memory";
^~~~~~~~~~~~~~~
./src/include/libpq/libpq-be-fe-helpers.h:421:13: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
error = "cancel request timed out";
^~~~~~~~~~~~~~~~~~~~~~~~~~

The second part of that could easily be fixed by declaring "error" as
"const char *". As for the first part, can we redefine the whole
function as returning "const char *"? (If not, this coding is very
questionable anyway.)
Yeah, this seems to work and I no longer get that complaint from
headerscheck.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Attachments:
0001-Make-libpqsrv_cancel-s-return-type-const-char.patch (text/x-diff)
From 0af8c7039b2c8ed80bc0bddacfe4a9abb8f527b3 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Thu, 4 Apr 2024 10:13:07 +0200
Subject: [PATCH] Make libpqsrv_cancel's return type const char *
---
contrib/dblink/dblink.c | 2 +-
contrib/postgres_fdw/connection.c | 2 +-
src/include/libpq/libpq-be-fe-helpers.h | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index de858e165a..755293456f 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1347,7 +1347,7 @@ Datum
dblink_cancel_query(PG_FUNCTION_ARGS)
{
PGconn *conn;
- char *msg;
+ const char *msg;
TimestampTz endtime;
dblink_init();
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 2532e453c4..cac9d96d33 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -1332,7 +1332,7 @@ pgfdw_cancel_query(PGconn *conn)
static bool
pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
{
- char *errormsg = libpqsrv_cancel(conn, endtime);
+ const char *errormsg = libpqsrv_cancel(conn, endtime);
if (errormsg != NULL)
ereport(WARNING,
diff --git a/src/include/libpq/libpq-be-fe-helpers.h b/src/include/libpq/libpq-be-fe-helpers.h
index 8be9aa1f2f..fe50829274 100644
--- a/src/include/libpq/libpq-be-fe-helpers.h
+++ b/src/include/libpq/libpq-be-fe-helpers.h
@@ -382,11 +382,11 @@ libpqsrv_get_result(PGconn *conn, uint32 wait_event_info)
* Note: this function leaks a string's worth of memory when reporting
* libpq errors. Make sure to call it in a transient memory context.
*/
-static inline char *
+static inline const char *
libpqsrv_cancel(PGconn *conn, TimestampTz endtime)
{
PGcancelConn *cancel_conn;
- char *error = NULL;
+ const char *error = NULL;
cancel_conn = PQcancelCreate(conn);
if (cancel_conn == NULL)
--
2.39.2
On Thu, 4 Apr 2024 at 10:45, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Yeah, this seems to work and I no longer get that complaint from
headerscheck.
patch LGTM
[ from a week ago ]
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
Hm, indri failed:
ccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O2 -fno-common -Werror -fvisibility=hidden -bundle -o dblink.dylib dblink.o -L../../src/port -L../../src/common -L../../src/interfaces/libpq -lpq -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk -L/opt/local/libexec/llvm-15/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -L/opt/local/lib -Wl,-dead_strip_dylibs -Werror -fvisibility=hidden -bundle_loader ../../src/backend/postgres
Undefined symbols for architecture arm64:
"_libintl_gettext", referenced from:
_libpqsrv_cancel in dblink.o
_libpqsrv_cancel in dblink.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [dblink.dylib] Error 1
make: *** [all-dblink-recurse] Error 2
Having just fixed the same issue for test_json_parser, I now realize
what's going on there: dblink's link command doesn't actually mention
any of the external libraries that we might need, such as libintl.
You can get away with that on some platforms, but not macOS.
It would probably be possible to fix that if anyone cared to.
I'm not sufficiently excited about it to do so right now --- as
you say, we don't support translation in contrib anyway.
regards, tom lane
Hello hackers,
30.03.2024 01:17, Noah Misch wrote:
On Fri, Mar 29, 2024 at 09:17:55AM +0100, Jelte Fennema-Nio wrote:
On Thu, 28 Mar 2024 at 19:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Could we make this test bulletproof by using an injection point?
If not, I remain of the opinion that we're better off without it.

Possibly, and if so, I agree that would be better than the currently
added test. But I honestly don't feel like spending the time on
creating such a test.

The SQL test is more representative of real applications, and it's way simpler
to understand. In general, I prefer 6-line SQL tests that catch a problem 10%
of the time over injection point tests that catch it 100% of the time. For
low detection rate to be exciting, it needs to be low enough to have a serious
chance of all buildfarm members reporting green for the bad commit. With ~115
buildfarm members running in the last day, 0.1% detection rate would have been
low enough to bother improving, but 4% would be high enough to call it good.
As a recent buildfarm failure on olingo (which tests asan-enabled builds) shows [1]:
70/70 postgresql:postgres_fdw-running / postgres_fdw-running/regress ERROR 278.67s exit status 1
@@ -2775,6 +2775,7 @@
SET LOCAL statement_timeout = '10ms';
select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
COMMIT;
(from the next run we can see normal duration:
"postgres_fdw-running/regress OK 6.30s ")
I reproduced the failure with an asan-enabled build on a slowed-down VM
and as far as I can see, it's caused by the following condition in
ProcessInterrupts():
/*
* If we are reading a command from the client, just ignore the cancel
* request --- sending an extra error message won't accomplish
* anything. Otherwise, go ahead and throw the error.
*/
if (!DoingCommandRead)
{
LockErrorCleanup();
ereport(ERROR,
(errcode(ERRCODE_QUERY_CANCELED),
errmsg("canceling statement due to user request")));
}
I think this failure can be reproduced easily (without asan/slowing down)
with this modification:
@@ -4630,6 +4630,7 @@ PostgresMain(const char *dbname, const char *username)
idle_session_timeout_enabled = false;
}
+if (rand() % 10 == 0) pg_usleep(10000);
/*
* (5) disable async signal conditions again.
*
Running this test in a loop (for ((i=1;i<=100;i++)); do \
echo "iteration $i"; make -s check -C contrib/postgres_fdw/ || break; \
done), I get:
...
iteration 56
# +++ regress check in contrib/postgres_fdw +++
# initializing database system by copying initdb template
# using temp instance on port 55312 with PID 991332
ok 1 - postgres_fdw 20093 ms
1..1
# All 1 tests passed.
iteration 57
# +++ regress check in contrib/postgres_fdw +++
# initializing database system by copying initdb template
# using temp instance on port 55312 with PID 992152
not ok 1 - postgres_fdw 62064 ms
1..1
...
--- .../contrib/postgres_fdw/expected/postgres_fdw.out 2024-06-22 02:52:42.991574907 +0000
+++ .../contrib/postgres_fdw/results/postgres_fdw.out 2024-06-22 14:43:43.949552927 +0000
@@ -2775,6 +2775,7 @@
SET LOCAL statement_timeout = '10ms';
select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
COMMIT;
I also came across another failure of the test:
@@ -2774,7 +2774,7 @@
BEGIN;
SET LOCAL statement_timeout = '10ms';
select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
+ERROR: canceling statement due to user request
COMMIT;
which is reproduced with a sleep added here:
@@ -1065,6 +1065,7 @@ exec_simple_query(const char *query_string)
*/
parsetree_list = pg_parse_query(query_string);
+pg_usleep(11000);
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=olingo&dt=2024-06-20%2009%3A52%3A04
Best regards,
Alexander
On Sat, 22 Jun 2024 at 17:00, Alexander Lakhin <exclusion@gmail.com> wrote:
@@ -2775,6 +2775,7 @@
 SET LOCAL statement_timeout = '10ms';
 select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
 ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
 COMMIT;
As you describe it, this problem occurs when the cancel request is
processed by the foreign server, before the query is actually
received. And postgres then (rightly) ignores the cancel request. I'm
not sure if the existing test is easily changeable to fix this. The
only thing that I can imagine works in practice is increasing the
statement_timeout, e.g. to 100ms.
I also came across another failure of the test:

@@ -2774,7 +2774,7 @@
 BEGIN;
 SET LOCAL statement_timeout = '10ms';
 select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
+ERROR: canceling statement due to user request
 COMMIT;

which is reproduced with a sleep added here:
@@ -1065,6 +1065,7 @@ exec_simple_query(const char *query_string)
*/
parsetree_list = pg_parse_query(query_string);
+pg_usleep(11000);
After investigating, I realized this actually exposes a bug in our
statement timeout logic. It has nothing to do with posgres_fdw and
reproduces with any regular postgres query too. Attached is a patch
that fixes this issue. This one should probably be backported.
Attachments:
v1-0001-Do-not-reset-statement_timeout-indicator-outside-.patch (application/octet-stream)
From d9033c94681b4f916852469e211675d9781c81c2 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 24 Jun 2024 00:29:39 +0200
Subject: [PATCH v1] Do not reset statement_timeout indicator outside of
ProcessInterrupts
The only way that ProcessInterrupts can know why QueryCancelPending is
set is by looking at the indicator bits of the various timeouts. We were
resetting the one for STATEMENT_TIMEOUT in various places, thus possibly
causing ProcessInterrupts to fail with the wrong error message.
---
src/backend/tcop/postgres.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index d843473f1c8..29244c96665 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -5156,22 +5156,33 @@ enable_statement_timeout(void)
if (StatementTimeout > 0
&& (StatementTimeout < TransactionTimeout || TransactionTimeout == 0))
{
- if (!get_timeout_active(STATEMENT_TIMEOUT))
+ /*
+ * We check both if it's active or if it's already triggered. If it's
+ * already triggered we don't want to restart it because that clears
+ * the indicator flag, which in turn would cause the wrong error
+ * message to be used by ProcessInterrupts() on the next
+ * CHECK_FOR_INTERRUPTS() call. Restarting the timer in that case
+ * would be pointless anyway, because the statement timeout error is
+ * going to trigger on the next CHECK_FOR_INTERRUPTS() call.
+ */
+ if (!get_timeout_active(STATEMENT_TIMEOUT)
+ && !get_timeout_indicator(STATEMENT_TIMEOUT, false))
enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
}
else
{
- if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_statement_timeout();
}
}
/*
- * Disable statement timeout, if active.
+ * Disable statement timeout, if active. We preserve the indicator flag
+ * though, otherwise we'd lose the knowledge in ProcessInterrupts that the
+ * SIGINT came from a statement timeout.
*/
static void
disable_statement_timeout(void)
{
if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_timeout(STATEMENT_TIMEOUT, true);
}
--
2.34.1
24.06.2024 01:59, Jelte Fennema-Nio wrote:
On Sat, 22 Jun 2024 at 17:00, Alexander Lakhin <exclusion@gmail.com> wrote:
@@ -2775,6 +2775,7 @@
 SET LOCAL statement_timeout = '10ms';
 select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
 ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
 COMMIT;

As you describe it, this problem occurs when the cancel request is
processed by the foreign server, before the query is actually
received. And postgres then (rightly) ignores the cancel request. I'm
not sure if the existing test is easily changeable to fix this. The
only thing that I can imagine works in practice is increasing the
statement_timeout, e.g. to 100ms.
I'd just like to add that that one original query results in several "remote"
queries (see the attached excerpt from postmaster.log with verbose logging
enabled).
Best regards,
Alexander
Attachments:
On Tue, 25 Jun 2024 at 07:00, Alexander Lakhin <exclusion@gmail.com> wrote:
I'd just like to add that that one original query assumes several "remote"
queries (see the attached excerpt from postmaster.log with verbose logging
enabled).
Nice catch! All those EXPLAIN queries are definitely not intentional,
and likely to greatly increase the likelihood of this flakiness.
Attached is a patch that fixes that by moving the test before enabling
use_remote_estimate on any of the foreign tables, as well as
increasing the statement_timeout to 100ms.
My expectation is that that should remove all failure cases. If it
doesn't, I think our best bet is removing the test again.
Attachments:
v2-0001-Make-postgres_fdw-cancel-test-not-flaky-anymore.patch (application/octet-stream)
From b98730ce927a5c194d617334407038907a9e04f4 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Tue, 25 Jun 2024 10:11:02 +0200
Subject: [PATCH v2 1/2] Make postgres_fdw cancel test not flaky anymore
The postgres_fdw cancel test turned out to be flaky. The reason for this
was that the cancel was sometimes sent earlier than intended. It was
meant to be sent during the actual CROSS JOIN, but (especially on slow
systems) it was sometimes sent in between two queries.
This patch tries to remove that issue in two ways:
1. Reduce the amount of queries that are sent by postgres_fdw, by
placing the test before enabling use_remote_estimate on any of the
tables in question.
2. Increasing the statement_timeout to 100ms
Reported-By: Alexander Lakhin
---
.../postgres_fdw/expected/postgres_fdw.out | 35 ++++++++++---------
contrib/postgres_fdw/sql/postgres_fdw.sql | 19 +++++-----
2 files changed, 30 insertions(+), 24 deletions(-)
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index ea566d50341..16071b3f7d3 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -250,6 +250,25 @@ SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
(1 row)
\set VERBOSITY default
+-- Let's test canceling a remote query, we do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+COMMIT;
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -2760,22 +2779,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Foreign Scan
- Output: (count(*))
- Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
- Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
-(4 rows)
-
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
-COMMIT;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index b57f8cfda68..f9af38b44b9 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -242,6 +242,17 @@ ALTER USER MAPPING FOR CURRENT_USER SERVER loopback
SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
\set VERBOSITY default
+-- Let's test canceling a remote query, we do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+COMMIT;
+
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -742,14 +753,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-COMMIT;
-
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
base-commit: 23c5a0e7d43bc925c6001538f04a458933a11fc1
--
2.34.1
v2-0002-Do-not-reset-statement_timeout-indicator-outside-.patch (application/octet-stream)
From c741cfe751c0ae7977d1df30daddae5740ba9b27 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 24 Jun 2024 00:29:39 +0200
Subject: [PATCH v2 2/2] Do not reset statement_timeout indicator outside of
ProcessInterrupts
The only way that ProcessInterrupts can know why QueryCancelPending is
set is by looking at the indicator bits of the various timeouts. We were
resetting the one for STATEMENT_TIMEOUT in various places, thus possibly
causing ProcessInterrupts to fail with the wrong error message.
---
src/backend/tcop/postgres.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 45a3794b8e3..160cc4df853 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -5150,22 +5150,33 @@ enable_statement_timeout(void)
if (StatementTimeout > 0
&& (StatementTimeout < TransactionTimeout || TransactionTimeout == 0))
{
- if (!get_timeout_active(STATEMENT_TIMEOUT))
+ /*
+ * We check both if it's active or if it's already triggered. If it's
+ * already triggered we don't want to restart it because that clears
+ * the indicator flag, which in turn would cause the wrong error
+ * message to be used by ProcessInterrupts() on the next
+ * CHECK_FOR_INTERRUPTS() call. Restarting the timer in that case
+ * would be pointless anyway, because the statement timeout error is
+ * going to trigger on the next CHECK_FOR_INTERRUPTS() call.
+ */
+ if (!get_timeout_active(STATEMENT_TIMEOUT)
+ && !get_timeout_indicator(STATEMENT_TIMEOUT, false))
enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
}
else
{
- if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_statement_timeout();
}
}
/*
- * Disable statement timeout, if active.
+ * Disable statement timeout, if active. We preserve the indicator flag
+ * though, otherwise we'd lose the knowledge in ProcessInterrupts that the
+ * SIGINT came from a statement timeout.
*/
static void
disable_statement_timeout(void)
{
if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_timeout(STATEMENT_TIMEOUT, true);
}
--
2.34.1
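The indicator-preservation logic in the patch above can be illustrated with a small standalone model. This is a simplified sketch using made-up flag variables, not the real timeout.c API; it only shows why re-arming a timer whose indicator has already fired would erase the information that ProcessInterrupts needs.

```c
/* Simplified model (hypothetical names, not the real timeout.c API) of the
 * indicator-preservation logic: re-arming a fired timer clears its flag. */
#include <assert.h>
#include <stdbool.h>

static bool timeout_active;     /* timer is armed */
static bool timeout_indicator;  /* timer has fired; records the cancel reason */

static void enable_timeout(void) { timeout_active = true;  timeout_indicator = false; }
static void fire_timeout(void)   { timeout_active = false; timeout_indicator = true; }

/* Pre-patch behavior: checks only "active", so a fired timer gets re-armed
 * and the indicator flag is lost. */
static void enable_statement_timeout_old(void)
{
    if (!timeout_active)
        enable_timeout();
}

/* Patched behavior: also skip re-arming when the indicator is set, so the
 * next CHECK_FOR_INTERRUPTS() can still tell why the cancel happened. */
static void enable_statement_timeout_patched(void)
{
    if (!timeout_active && !timeout_indicator)
        enable_timeout();
}
```

In the real code the flag is read via get_timeout_indicator(STATEMENT_TIMEOUT, false); the booleans here merely stand in for that state.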
On Tue, Mar 12, 2024 at 05:50:48PM +0100, Alvaro Herrera wrote:
On 2024-Mar-12, Jelte Fennema-Nio wrote:
On Tue, 12 Mar 2024 at 10:19, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Here's a last one for the cfbot.
Thanks for committing the first 3 patches btw.
Thanks, I included it.
PGcancelConn *
PQcancelCreate(PGconn *conn)
{
...
oom_error:
conn->status = CONNECTION_BAD;
libpq_append_conn_error(cancelConn, "out of memory");
return (PGcancelConn *) cancelConn;
}
Shouldn't that be s/conn->status/cancelConn->status/?
On Sun, 30 Jun 2024 at 21:00, Noah Misch <noah@leadboat.com> wrote:
Shouldn't that be s/conn->status/cancelConn->status/?
Ugh yes, I think this was a copy paste error. See attached patch 0003
to fix this (rest of the patches are untouched from previous
revision).
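For readers following along, the caller side of the non-blocking cancel flow discussed in this thread looks roughly like the loop below. The PQcancel* names follow the patch revisions quoted above, but they are replaced here by minimal local stubs so the loop shape can be compiled and exercised without a server; this is a sketch of the state machine a caller drives, not the real libpq implementation.

```c
/* Sketch of a non-blocking cancel loop. PQcancelCreate/PQcancelStart/
 * PQcancelPoll/PQcancelFinish are stand-in stubs, not libpq itself. */
#include <assert.h>
#include <stddef.h>

typedef enum { PGRES_POLLING_READING, PGRES_POLLING_WRITING,
               PGRES_POLLING_OK, PGRES_POLLING_FAILED } PostgresPollingStatusType;

typedef struct PGcancelConn { int steps_left; } PGcancelConn;

static PGcancelConn stub_conn;

static PGcancelConn *PQcancelCreate(void *conn)
{
    (void) conn;
    stub_conn.steps_left = 3;   /* pretend the handshake needs a few rounds */
    return &stub_conn;
}
static int PQcancelStart(PGcancelConn *c) { return c != NULL; }
static PostgresPollingStatusType PQcancelPoll(PGcancelConn *c)
{
    if (c->steps_left-- > 0)
        return PGRES_POLLING_READING;
    return PGRES_POLLING_OK;
}
static void PQcancelFinish(PGcancelConn *c) { (void) c; }

/* The event-loop side: returns 1 on success, 0 on failure. A real caller
 * would wait for socket readability/writability between PQcancelPoll calls
 * instead of spinning. */
static int send_cancel_nonblocking(void *conn)
{
    PGcancelConn *cancelConn = PQcancelCreate(conn);
    PostgresPollingStatusType st;

    if (!PQcancelStart(cancelConn))
        return 0;
    while ((st = PQcancelPoll(cancelConn)) != PGRES_POLLING_OK)
    {
        if (st == PGRES_POLLING_FAILED)
        {
            PQcancelFinish(cancelConn);
            return 0;
        }
        /* here: hand the socket to the event loop and wait for readiness */
    }
    PQcancelFinish(cancelConn);
    return 1;
}
```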
Attachments:
v3-0001-Make-postgres_fdw-cancel-test-not-flaky-anymore.patch
From ab0925cd4069d1f98df3ff58bc563e6852adc2f6 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Tue, 25 Jun 2024 10:11:02 +0200
Subject: [PATCH v3 1/3] Make postgres_fdw cancel test not flaky anymore
The postgres_fdw cancel test turned out to be flaky. The reason for this
was that the cancel was sometimes sent earlier than intended. It was
meant to be sent during the actual CROSS JOIN, but (especially on slow
systems) it was sometimes sent in between two queries.
This patch tries to remove that issue in two ways:
1. Reduce the number of queries that are sent by postgres_fdw, by
placing the test before enabling use_remote_estimate on any of the
tables in question.
2. Increase the statement_timeout to 100ms.
Reported-By: Alexander Lakhin
---
.../postgres_fdw/expected/postgres_fdw.out | 35 ++++++++++---------
contrib/postgres_fdw/sql/postgres_fdw.sql | 19 +++++-----
2 files changed, 30 insertions(+), 24 deletions(-)
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index ea566d5034..16071b3f7d 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -250,6 +250,25 @@ SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
(1 row)
\set VERBOSITY default
+-- Let's test canceling a remote query. We do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+COMMIT;
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -2760,22 +2779,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Foreign Scan
- Output: (count(*))
- Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
- Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
-(4 rows)
-
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
-COMMIT;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index b57f8cfda6..f9af38b44b 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -242,6 +242,17 @@ ALTER USER MAPPING FOR CURRENT_USER SERVER loopback
SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
\set VERBOSITY default
+-- Let's test canceling a remote query. We do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+COMMIT;
+
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -742,14 +753,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-COMMIT;
-
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
base-commit: 54508209178bc73a497c460bd0ffd1645dceb1a2
--
2.34.1
v3-0003-Fix-copy-paste-mistake-in-PQcancelCreate.patch
From 5f72aa57fa0623c295b26e065e49e0f4945185ec Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <github-tech@jeltef.nl>
Date: Mon, 1 Jul 2024 00:33:41 +0200
Subject: [PATCH v3 3/3] Fix copy-paste mistake in PQcancelCreate
When an OOM occurred, the code incorrectly set a status of
CONNECTION_BAD on the passed-in PGconn instead of on the newly
created PGcancelConn.
---
src/interfaces/libpq/fe-cancel.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 3b6206ea7f..9562a7fe44 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -155,7 +155,7 @@ PQcancelCreate(PGconn *conn)
return (PGcancelConn *) cancelConn;
oom_error:
- conn->status = CONNECTION_BAD;
+ cancelConn->status = CONNECTION_BAD;
libpq_append_conn_error(cancelConn, "out of memory");
return (PGcancelConn *) cancelConn;
}
--
2.34.1
v3-0002-Do-not-reset-statement_timeout-indicator-outside-.patch
From c1f86c6766a25c8427cf90af8efc815830bfee9f Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 24 Jun 2024 00:29:39 +0200
Subject: [PATCH v3 2/3] Do not reset statement_timeout indicator outside of
ProcessInterrupts
The only way that ProcessInterrupts can know why QueryCancelPending is
set is by looking at the indicator bits of the various timeouts. We were
resetting the one for STATEMENT_TIMEOUT in various places, thus possibly
causing ProcessInterrupts to fail with the wrong error message.
---
src/backend/tcop/postgres.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 45a3794b8e..160cc4df85 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -5150,22 +5150,33 @@ enable_statement_timeout(void)
if (StatementTimeout > 0
&& (StatementTimeout < TransactionTimeout || TransactionTimeout == 0))
{
- if (!get_timeout_active(STATEMENT_TIMEOUT))
+ /*
+ * We check both if it's active or if it's already triggered. If it's
+ * already triggered we don't want to restart it because that clears
+ * the indicator flag, which in turn would cause the wrong error
+ * message to be used by ProcessInterrupts() on the next
+ * CHECK_FOR_INTERRUPTS() call. Restarting the timer in that case
+ * would be pointless anyway, because the statement timeout error is
+ * going to trigger on the next CHECK_FOR_INTERRUPTS() call.
+ */
+ if (!get_timeout_active(STATEMENT_TIMEOUT)
+ && !get_timeout_indicator(STATEMENT_TIMEOUT, false))
enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
}
else
{
- if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_statement_timeout();
}
}
/*
- * Disable statement timeout, if active.
+ * Disable statement timeout, if active. We preserve the indicator flag
+ * though, otherwise we'd lose the knowledge in ProcessInterrupts that the
+ * SIGINT came from a statement timeout.
*/
static void
disable_statement_timeout(void)
{
if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_timeout(STATEMENT_TIMEOUT, true);
}
--
2.34.1
On Mon, 1 Jul 2024 at 00:38, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
Ugh yes, I think this was a copy paste error. See attached patch 0003
to fix this (rest of the patches are untouched from previous
revision).
Alvaro committed 0003, which caused cfbot to think a rebase is
necessary. Attached should solve that.
Attachments:
v4-0001-Make-postgres_fdw-cancel-test-not-flaky-anymore.patch
From fb6db7ba6e756b37e17096f21fc3b39085fd2585 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Tue, 25 Jun 2024 10:11:02 +0200
Subject: [PATCH v4 1/2] Make postgres_fdw cancel test not flaky anymore
The postgres_fdw cancel test turned out to be flaky. The reason for this
was that the cancel was sometimes sent earlier than intended. It was
meant to be sent during the actual CROSS JOIN, but (especially on slow
systems) it was sometimes sent in between two queries.
This patch tries to remove that issue in two ways:
1. Reduce the number of queries that are sent by postgres_fdw, by
placing the test before enabling use_remote_estimate on any of the
tables in question.
2. Increase the statement_timeout to 100ms.
Reported-By: Alexander Lakhin
---
.../postgres_fdw/expected/postgres_fdw.out | 35 ++++++++++---------
contrib/postgres_fdw/sql/postgres_fdw.sql | 19 +++++-----
2 files changed, 30 insertions(+), 24 deletions(-)
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 1f22309194..7fa1dd3907 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -250,6 +250,25 @@ SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
(1 row)
\set VERBOSITY default
+-- Let's test canceling a remote query. We do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+COMMIT;
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -2760,22 +2779,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Foreign Scan
- Output: (count(*))
- Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
- Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
-(4 rows)
-
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
-COMMIT;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index b57f8cfda6..f9af38b44b 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -242,6 +242,17 @@ ALTER USER MAPPING FOR CURRENT_USER SERVER loopback
SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1; -- should work again
\set VERBOSITY default
+-- Let's test canceling a remote query. We do this before enabling
+-- use_remote_estimate on ft2 to avoid sending many queries to the remote
+-- server. Otherwise, we might be unlucky and the cancel to the remote might be
+-- sent right in between two of the queries, causing a flaky test. First let's
+-- confirm that the query is actually pushed down.
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+BEGIN;
+SET LOCAL statement_timeout = '100ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+COMMIT;
+
-- Now we should be able to run ANALYZE.
-- To exercise multiple code paths, we use local stats on ft1
-- and remote-estimate mode on ft2.
@@ -742,14 +753,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-COMMIT;
-
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
base-commit: 05506510de6ae24ba6de00cef2f458920c8a72ea
--
2.34.1
v4-0002-Do-not-reset-statement_timeout-indicator-outside-.patch
From b87758a8bbfde9a035de65bf9048c1521afd4ca7 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <jelte.fennema@microsoft.com>
Date: Mon, 24 Jun 2024 00:29:39 +0200
Subject: [PATCH v4 2/2] Do not reset statement_timeout indicator outside of
ProcessInterrupts
The only way that ProcessInterrupts can know why QueryCancelPending is
set is by looking at the indicator bits of the various timeouts. We were
resetting the one for STATEMENT_TIMEOUT in various places, thus possibly
causing ProcessInterrupts to fail with the wrong error message.
---
src/backend/tcop/postgres.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index e39c6804a7..03d1eb63ff 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -5150,22 +5150,33 @@ enable_statement_timeout(void)
if (StatementTimeout > 0
&& (StatementTimeout < TransactionTimeout || TransactionTimeout == 0))
{
- if (!get_timeout_active(STATEMENT_TIMEOUT))
+ /*
+ * We check both if it's active or if it's already triggered. If it's
+ * already triggered we don't want to restart it because that clears
+ * the indicator flag, which in turn would cause the wrong error
+ * message to be used by ProcessInterrupts() on the next
+ * CHECK_FOR_INTERRUPTS() call. Restarting the timer in that case
+ * would be pointless anyway, because the statement timeout error is
+ * going to trigger on the next CHECK_FOR_INTERRUPTS() call.
+ */
+ if (!get_timeout_active(STATEMENT_TIMEOUT)
+ && !get_timeout_indicator(STATEMENT_TIMEOUT, false))
enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
}
else
{
- if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_statement_timeout();
}
}
/*
- * Disable statement timeout, if active.
+ * Disable statement timeout, if active. We preserve the indicator flag
+ * though, otherwise we'd lose the knowledge in ProcessInterrupts that the
+ * SIGINT came from a statement timeout.
*/
static void
disable_statement_timeout(void)
{
if (get_timeout_active(STATEMENT_TIMEOUT))
- disable_timeout(STATEMENT_TIMEOUT, false);
+ disable_timeout(STATEMENT_TIMEOUT, true);
}
--
2.34.1
Hello,
25.06.2024 11:24, Jelte Fennema-Nio wrote:
My expectation is that that should remove all failure cases. If it
doesn't, I think our best bet is removing the test again.
It looks like that test eventually showed what could be called a virtue.
Please take a look at a recent BF failure [1]:
timed out after 10800 secs
...
# +++ regress install-check in contrib/postgres_fdw +++
# using postmaster on /home/andrew/bf/root/tmp/buildfarm-e2ahpQ, port 5878
So the postgres_fdw test hanged for several hours while running on the
Cygwin animal lorikeet.
I've managed to reproduce this issue in my Cygwin environment by running
the postgres_fdw test in a loop (10 iterations are enough to get the
described effect). And what I'm seeing is that a query-cancelling backend
is stuck inside pgfdw_xact_callback() -> pgfdw_abort_cleanup() ->
pgfdw_cancel_query() -> pgfdw_cancel_query_begin() -> libpqsrv_cancel() ->
WaitLatchOrSocket() -> WaitEventSetWait() -> WaitEventSetWaitBlock() ->
poll().
The timeout value (approximately 30 seconds), which is passed to poll(),
is effectively ignored by this call — the waiting lasts for unlimited time.
This definitely is caused by 2466d6654. (I applied the test change from that
commit to 2466d6654~1 and saw no issue when running the same test in a
loop.)
With gdb attached to a hanging backend, I see the following stack trace:
#0 0x00007ffb7f70d5e4 in ntdll!ZwWaitForSingleObject () from /cygdrive/c/Windows/SYSTEM32/ntdll.dll
#1 0x00007ffb7d2e920e in WaitForSingleObjectEx () from /cygdrive/c/Windows/System32/KERNELBASE.dll
#2 0x00007ffb5ce78862 in fhandler_socket_wsock::evaluate_events (this=0x800126968, event_mask=50, events=@0x7ffffb208:
0, erase=erase@entry=false)
at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/fhandler/socket_inet.cc:268
#3 0x00007ffb5cdef0f5 in peek_socket (me=0xa001a43c0) at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:1771
#4 0x00007ffb5cdf211e in select_stuff::poll (this=this@entry=0x7ffffb300, readfds=0x7ffffb570,
readfds@entry=0x800000000, writefds=0x7ffffb560, writefds@entry=0x7ffffb5c0, exceptfds=0x7ffffb550,
exceptfds@entry=0x7ffb5cdf2c97 <cygwin_select(int, fd_set*, fd_set*, fd_set*, timeval*)+71>) at
/usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:554
#5 0x00007ffb5cdf257e in select (maxfds=maxfds@entry=45, readfds=0x800000000, writefds=0x7ffffb5c0,
exceptfds=0x7ffb5cdf2c97 <cygwin_select(int, fd_set*, fd_set*, fd_set*, timeval*)+71>, us=4308570016,
us@entry=29973000) at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:204
#6 0x00007ffb5cdf2927 in pselect (maxfds=45, readfds=0x7ffffb570, writefds=0x7ffffb560, exceptfds=0x7ffffb550,
to=<optimized out>, to@entry=0x7ffffb500, set=<optimized out>, set@entry=0x0)
at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:120
#7 0x00007ffb5cdf2c97 in cygwin_select (maxfds=<optimized out>, readfds=<optimized out>, writefds=<optimized out>,
exceptfds=<optimized out>, to=0x7ffffb5b0)
at /usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/select.cc:147
#8 0x00007ffb5cddc112 in poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at
/usr/src/debug/cygwin-3.5.3-1/winsup/cygwin/poll.cc:83
...
and socket_inet.cc:268 [2] indeed contains an infinite wait call
(LOCK_EVENTS; / WaitForSingleObject (wsock_mtx, INFINITE)).
So it looks like a Cygwin bug, but maybe something should be done on our side
too, at least to prevent such lorikeet failures.
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-07-12%2010%3A05%3A27
[2]: https://www.cygwin.com/cgit/newlib-cygwin/tree/winsup/cygwin/fhandler/socket_inet.cc?h=cygwin-3.5.3
Best regards,
Alexander
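One defensive option for "our side", sketched under the assumption that poll(2)'s own timeout cannot be trusted on the affected platform, is to enforce an overall monotonic-clock deadline around the wait. This is an illustrative standalone sketch, not code from any patch in this thread:

```c
/* Enforce an overall deadline around poll(2), so a platform where poll's
 * timeout is unreliable (as in the Cygwin hang described above) still
 * gives up eventually. Illustrative sketch only. */
#define _POSIX_C_SOURCE 200809L
#include <assert.h>
#include <poll.h>
#include <time.h>
#include <unistd.h>

static long long now_ms(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long) ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Returns >0 if fd became readable, 0 on deadline expiry, -1 on error. */
static int poll_with_deadline(int fd, int total_timeout_ms)
{
    long long deadline = now_ms() + total_timeout_ms;

    for (;;)
    {
        long long remain = deadline - now_ms();
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int rc;

        if (remain <= 0)
            return 0;           /* give up even if poll never timed out */
        rc = poll(&pfd, 1, (int) remain);
        if (rc != 0)
            return rc;          /* ready, or error */
        /* rc == 0: poll reported a timeout; re-check the deadline */
    }
}
```

The loop costs nothing on well-behaved platforms (one extra clock read per wakeup) while bounding the wait where poll misbehaves.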
On 2024-Jul-16, Alexander Lakhin wrote:
I've managed to reproduce this issue in my Cygwin environment by running
the postgres_fdw test in a loop (10 iterations are enough to get the
described effect). And what I'm seeing is that a query-cancelling backend
is stuck inside pgfdw_xact_callback() -> pgfdw_abort_cleanup() ->
pgfdw_cancel_query() -> pgfdw_cancel_query_begin() -> libpqsrv_cancel() ->
WaitLatchOrSocket() -> WaitEventSetWait() -> WaitEventSetWaitBlock() ->
poll(). The timeout value (approximately 30 seconds), which is passed to poll(),
is effectively ignored by this call — the waiting lasts for unlimited time.
Ugh. I tried to follow what's going on in that cygwin code, but I gave
up pretty quickly. It depends on a mutex, but I didn't see the mutex
being defined or initialized anywhere.
So it looks like a Cygwin bug, but maybe something should be done on our side
too, at least to prevent such lorikeet failures.
I don't know what else we can do other than remove the test.
Maybe we can disable this test specifically on Cygwin. We could do that
by creating a postgres_fdw_cancel.sql file, with the current output for
all platforms, and a "SELECT version() ~ 'cygwin' AS skip_test" query,
as we do for encoding tests and such.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
"Doing what he did amounts to sticking his fingers under the hood of the
implementation; if he gets his fingers burnt, it's his problem." (Tom Lane)
On 2024-Jul-16, Alvaro Herrera wrote:
Maybe we can disable this test specifically on Cygwin. We could do that
by creating a postgres_fdw_cancel.sql file, with the current output for
all platforms, and a "SELECT version() ~ 'cygwin' AS skip_test" query,
as we do for encoding tests and such.
Something like this.
--
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/
Attachments:
0001-Split-out-the-query_cancel-test-in-postgres_fdw.patch
From dcb8876e16429e8cba9ea226454b0b77e2e00512 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Tue, 16 Jul 2024 17:18:55 +0200
Subject: [PATCH] Split out the query_cancel test in postgres_fdw
so that it can be skipped in Cygwin
---
contrib/postgres_fdw/Makefile | 2 +-
.../postgres_fdw/expected/postgres_fdw.out | 16 ---------------
.../postgres_fdw/expected/query_cancel.out | 20 +++++++++++++++++++
.../postgres_fdw/expected/query_cancel.sql | 17 ++++++++++++++++
.../postgres_fdw/expected/query_cancel_1.out | 3 +++
contrib/postgres_fdw/meson.build | 1 +
contrib/postgres_fdw/sql/postgres_fdw.sql | 8 --------
contrib/postgres_fdw/sql/query_cancel.sql | 12 +++++++++++
8 files changed, 54 insertions(+), 25 deletions(-)
create mode 100644 contrib/postgres_fdw/expected/query_cancel.out
create mode 100644 contrib/postgres_fdw/expected/query_cancel.sql
create mode 100644 contrib/postgres_fdw/expected/query_cancel_1.out
create mode 100644 contrib/postgres_fdw/sql/query_cancel.sql
diff --git a/contrib/postgres_fdw/Makefile b/contrib/postgres_fdw/Makefile
index c1b0cad453..b9fa699305 100644
--- a/contrib/postgres_fdw/Makefile
+++ b/contrib/postgres_fdw/Makefile
@@ -16,7 +16,7 @@ SHLIB_LINK_INTERNAL = $(libpq)
EXTENSION = postgres_fdw
DATA = postgres_fdw--1.0.sql postgres_fdw--1.0--1.1.sql
-REGRESS = postgres_fdw
+REGRESS = postgres_fdw query_cancel
ifdef USE_PGXS
PG_CONFIG = pg_config
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 0cc77190dc..8852c19b48 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -2760,22 +2760,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
(10 rows)
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
- QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- Foreign Scan
- Output: (count(*))
- Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
- Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
-(4 rows)
-
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-ERROR: canceling statement due to statement timeout
-COMMIT;
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/expected/query_cancel.out b/contrib/postgres_fdw/expected/query_cancel.out
new file mode 100644
index 0000000000..afef67aa8d
--- /dev/null
+++ b/contrib/postgres_fdw/expected/query_cancel.out
@@ -0,0 +1,20 @@
+SELECT version() ~ 'cygwin' AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+BEGIN;
+SET LOCAL statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+COMMIT;
diff --git a/contrib/postgres_fdw/expected/query_cancel.sql b/contrib/postgres_fdw/expected/query_cancel.sql
new file mode 100644
index 0000000000..45a26e06f5
--- /dev/null
+++ b/contrib/postgres_fdw/expected/query_cancel.sql
@@ -0,0 +1,17 @@
+
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+ QUERY PLAN
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Foreign Scan
+ Output: (count(*))
+ Relations: Aggregate on ((((public.ft1) INNER JOIN (public.ft2)) INNER JOIN (public.ft4)) INNER JOIN (public.ft5))
+ Remote SQL: SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 3" r4 ON (TRUE)) INNER JOIN "S 1"."T 4" r6 ON (TRUE))
+(4 rows)
+
+-- Make sure query cancellation works
+BEGIN;
+SET LOCAL statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+ERROR: canceling statement due to statement timeout
+COMMIT;
diff --git a/contrib/postgres_fdw/expected/query_cancel_1.out b/contrib/postgres_fdw/expected/query_cancel_1.out
new file mode 100644
index 0000000000..c909f2917d
--- /dev/null
+++ b/contrib/postgres_fdw/expected/query_cancel_1.out
@@ -0,0 +1,3 @@
+SELECT version() ~ 'cygwin' AS skip_test \gset
+\if :skip_test
+\quit
diff --git a/contrib/postgres_fdw/meson.build b/contrib/postgres_fdw/meson.build
index 2b86d8a6ee..f0803ee077 100644
--- a/contrib/postgres_fdw/meson.build
+++ b/contrib/postgres_fdw/meson.build
@@ -36,6 +36,7 @@ tests += {
'regress': {
'sql': [
'postgres_fdw',
+ 'query_cancel',
],
'regress_args': ['--dlpath', meson.build_root() / 'src/test/regress'],
},
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index b57f8cfda6..1cfb5246ff 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -742,14 +742,6 @@ SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c
SELECT t1.c1, t2.c2 FROM v4 t1 LEFT JOIN ft5 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c1, t2.c1 OFFSET 10 LIMIT 10;
ALTER VIEW v4 OWNER TO regress_view_owner;
--- Make sure this big CROSS JOIN query is pushed down
-EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
--- Make sure query cancellation works
-BEGIN;
-SET LOCAL statement_timeout = '10ms';
-select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
-COMMIT;
-
-- ====================================================================
-- Check that userid to use when querying the remote table is correctly
-- propagated into foreign rels present in subqueries under an UNION ALL
diff --git a/contrib/postgres_fdw/sql/query_cancel.sql b/contrib/postgres_fdw/sql/query_cancel.sql
new file mode 100644
index 0000000000..11fe077417
--- /dev/null
+++ b/contrib/postgres_fdw/sql/query_cancel.sql
@@ -0,0 +1,12 @@
+SELECT version() || 'cygwin' ~ 'cygwin' AS skip_test \gset
+\if :skip_test
+\quit
+\endif
+
+-- Make sure this big CROSS JOIN query is pushed down
+EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
+-- Make sure query cancellation works
+BEGIN;
+SET LOCAL statement_timeout = '10ms';
+select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
+COMMIT;
--
2.39.2
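For context on what is actually being exercised here: regardless of which libpq entry point initiates it (the blocking PQcancel or the non-blocking path proposed in this patch), a cancellation ends up as a single 16-byte CancelRequest packet sent over a fresh connection to the server. A minimal sketch of the packet layout, per the frontend/backend protocol — the helper function is hypothetical, not part of the patch:

```python
import struct

# CancelRequest "magic" request code: (1234 << 16) | 5678 = 80877102,
# per the PostgreSQL frontend/backend protocol.
CANCEL_REQUEST_CODE = (1234 << 16) | 5678


def build_cancel_request(backend_pid: int, secret_key: int) -> bytes:
    """Build the 16-byte CancelRequest packet: Int32 length (16),
    Int32 request code, Int32 backend PID, Int32 secret key,
    all in network byte order."""
    return struct.pack("!IIII", 16, CANCEL_REQUEST_CODE,
                       backend_pid & 0xFFFFFFFF, secret_key & 0xFFFFFFFF)
```

The non-blocking API differs only in how the connection carrying this packet is established (reusing the regular PGconn establishment code path), not in the packet itself.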
On Wed, Jul 17, 2024 at 3:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Ugh. I tried to follow what's going on in that cygwin code, but I gave
up pretty quickly. It depends on a mutex, but I didn't see the mutex
being defined or initialized anywhere.
Not obvious how it'd be deadlocking (?), though... it's hard to see
how anything between LOCK_EVENTS and UNLOCK_EVENTS could escape/return
early. (Something weird going on with signal handlers? I can't
imagine where one would call poll() though).
Hello Thomas,
17.07.2024 03:05, Thomas Munro wrote:
On Wed, Jul 17, 2024 at 3:08 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
Ugh. I tried to follow what's going on in that cygwin code, but I gave
up pretty quickly. It depends on a mutex, but I didn't see the mutex
being defined or initialized anywhere.Not obvious how it'd be deadlocking (?), though... it's hard to see
how anything between LOCK_EVENTS and UNLOCK_EVENTS could escape/return
early. (Something weird going on with signal handlers? I can't
imagine where one would call poll() though).
I've simplified the repro to the following:
echo "
-- setup foreign server "loopback" --
CREATE TABLE t1(i int);
CREATE FOREIGN TABLE ft1 (i int) SERVER loopback OPTIONS (table_name 't1');
CREATE FOREIGN TABLE ft2 (i int) SERVER loopback OPTIONS (table_name 't1');
INSERT INTO t1 SELECT i FROM generate_series(1, 100000) g(i);
" | psql
cat << 'EOF' | psql
Select pg_sleep(10);
SET statement_timeout = '10ms';
SELECT 'SELECT count(*) FROM ft1 CROSS JOIN ft2;' FROM generate_series(1, 100)
\gexec
EOF
I've attached strace (with --mask=0x251, per [1]) to the query-cancelling
backend and got strace.log (attached), while observing:
ERROR: canceling statement due to statement timeout
...
ERROR: canceling statement due to statement timeout
-- total 14 lines, then the process hung --
-- I interrupted it several seconds later --
As far as I can see (having analyzed a number of runs), the hang occurs
when some itimer-related activity happens just before "peek_socket" in this
event sequence:
[main] postgres {pid} select_stuff::wait: res after verify 0
[main] postgres {pid} select_stuff::wait: returning 0
[main] postgres {pid} select: sel.wait returns 0
[main] postgres {pid} peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
(See the last occurrence of the sequence in the log.)
[1]: https://cygwin.com/cygwin-ug-net/strace.html
Best regards,
Alexander
Attachments:
strace.log (text/x-log; charset=UTF-8)
--- Process 3416 (pid: 58742) created
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\ntdll.dll at 00007ffcb5e10000
--- Process 3416 (pid: 58742) thread 5820 created
--- Process 3416 (pid: 58742) thread 4796 created
--- Process 3416 (pid: 58742) thread 6484 created
--- Process 3416 (pid: 58742) thread 1712 created
--- Process 3416 (pid: 58742) thread 4900 created
--- Process 3416 (pid: 58742) thread 3316 created
--- Process 3416 (pid: 58742) thread 872 created
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\kernel32.dll at 00007ffcb5030000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\KernelBase.dll at 00007ffcb3980000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygwin1.dll at 00007ffc90f40000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygcrypto-3.dll at 00000003ff8b0000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygz.dll at 00000003fe100000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygxml2-2.dll at 00000003fe160000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygssl-3.dll at 00000003fe520000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cyggcc_s-seh-1.dll at 00000003ff570000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cygiconv-2.dll at 00000003ff1f0000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\bin\cyglzma-5.dll at 00000003fef90000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\advapi32.dll at 00007ffcb5340000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\msvcrt.dll at 00007ffcb5c20000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\sechost.dll at 00007ffcb52a0000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\rpcrt4.dll at 00007ffcb5600000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\bcrypt.dll at 00007ffcb34c0000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\cryptbase.dll at 00007ffcb2d60000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\bcryptprimitives.dll at 00007ffcb36c0000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\ws2_32.dll at 00007ffcb5d60000
--- Process 3416 (pid: 58742) loaded C:\Windows\System32\mswsock.dll at 00007ffcb2b70000
--- Process 3416 (pid: 58742) thread 2268 created
--- Process 3416 (pid: 58742) thread 2268 exited with status 0x0
00:00:00 [sig] postgres 58742 **********************************************
00:00:00 [sig] postgres 58742 Program name: T:\cygwin64\usr\local\pgsql\bin\postgres.exe (pid 58742, ppid 58729, windows pid 3416)
00:00:00 [sig] postgres 58742 OS version: Windows NT-10.0
00:00:00 [sig] postgres 58742 **********************************************
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: timed out
00:00:08 [main] postgres 58742 select_stuff::wait: returning 1
00:00:08 [main] postgres 58742 select: sel.wait returns 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select: recalculating us
00:00:08 [main] postgres 58742 select: timed out after verification
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 0 = select (7, 0x7FFFFB460, 0x7FFFFB450, 0x7FFFFB440, 0x7FFFFB3F0)
00:00:08 [main] postgres 58742 cygwin_send: 65 = send(11, 0xA00085318, 65, 0x0)
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 11
00:00:08 [main] postgres 58742 __set_winsock_errno: recv_internal:1278 - winsock error 10035 -> errno 11
00:00:08 [main] postgres 58742 cygwin_recv: -1 = recv(11, 0x100CF6D00, 8192, 0x0), errno 11
00:00:08 [main] postgres 58742 pselect: pselect (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 11
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFB6B8
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFB6B8, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00123680, testing fd 11 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00084670 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 37 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [main] postgres 58742 write: 110 = write(2, 0xA000026A0, 110)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA48)
00:00:08 [main] postgres 58742 cygwin_send: 15 = send(11, 0xA00085318, 15, 0x0)
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 11
00:00:08 [main] postgres 58742 __set_winsock_errno: recv_internal:1278 - winsock error 10035 -> errno 11
00:00:08 [main] postgres 58742 cygwin_recv: -1 = recv(11, 0x100CF6D00, 8192, 0x0), errno 11
00:00:08 [main] postgres 58742 pselect: pselect (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 11
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFB6B8
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFB6B8, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00123680, testing fd 11 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00084670 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 84 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 157 = write(2, 0xA000026A0, 157)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFAEF8)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA48)
00:00:08 [main] postgres 58742 cygwin_send: 5156 = send(11, 0xA00085318, 5156, 0x0)
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 11
00:00:08 [main] postgres 58742 __set_winsock_errno: recv_internal:1278 - winsock error 10035 -> errno 11
00:00:08 [main] postgres 58742 cygwin_recv: -1 = recv(11, 0x100CF6D00, 8192, 0x0), errno 11
00:00:08 [main] postgres 58742 pselect: pselect (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 11
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFB6B8
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFB6B8, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 1, write_ready: 0, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00123680, testing fd 11 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00084670 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (12, 0x7FFFFB920, 0x7FFFFB910, 0x7FFFFB900, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [main] postgres 58742 write: 119 = write(2, 0xA000026A0, 119)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB008)
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: suspending thread, tls 0x7FFFFCE00, _main_tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: couldn't interrupt. trying again.
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x10077F40B
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 fhandler_disk_file::pread: 8192 = pread(0x6FFFF7CE8000, 8192, 393216, 0x0)
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 pread: 8192 = pread(13, 0x6FFFF7CE8000, 8192, 393216)
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 open: open(base/5/2650, 0x50002)
00:00:08 [main] postgres 58742 normalize_posix_path: src base/5/2650
00:00:08 [main] postgres 58742 cwdstuff::get: posix /home/1/postgresql/tmpdb
00:00:08 [main] postgres 58742 cwdstuff::get: (/home/1/postgresql/tmpdb) = cwdstuff::get (0x1A80730, 32768, 1, 0), errno 0
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/postgresql/tmpdb/base/5/2650 = normalize_posix_path (base/5/2650)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/postgresql/tmpdb/base/5/2650)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/postgresql/tmpdb/base/5/2650, dst T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, 0x7FFFF80B0) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650), has_acls(1)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000D99F8, dev 000000C3
00:00:08 [main] postgres 58742 fhandler_base::open: (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x50002, supplied_bin 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 fhandler_base::open: 0x0 = NtCreateFile (0x1BE8, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, io, NULL, 0x0, 0x7, 0x1, 0x4020, NULL, 0)
00:00:08 [main] postgres 58742 fhandler_base::open: 1 = fhandler_base::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::open_fs: 1 = fhandler_disk_file::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, 0x50002)
00:00:08 [main] postgres 58742 open: 15 = open(base/5/2650, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::lseek: setting file pointer to 16384
00:00:08 [main] postgres 58742 lseek: 16384 = lseek(15, 0, 2)
00:00:08 [main] postgres 58742 fhandler_disk_file::prw_open: 0x0 = NtOpenFile (0x19EC, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2650, io, 0x7, 0x4020)
00:00:08 [main] postgres 58742 fhandler_disk_file::pread: 8192 = pread(0x6FFFF7CEA000, 8192, 0, 0x0)
00:00:08 [main] postgres 58742 pread: 8192 = pread(15, 0x6FFFF7CEA000, 8192, 0)
00:00:08 [main] postgres 58742 fhandler_disk_file::pread: 8192 = pread(0x6FFFF7CEC000, 8192, 8192, 0x0)
00:00:08 [main] postgres 58742 pread: 8192 = pread(15, 0x6FFFF7CEC000, 8192, 8192)
00:00:08 [main] postgres 58742 open: open(base/5/2600, 0x50002)
00:00:08 [main] postgres 58742 normalize_posix_path: src base/5/2600
00:00:08 [main] postgres 58742 cwdstuff::get: posix /home/1/postgresql/tmpdb
00:00:08 [main] postgres 58742 cwdstuff::get: (/home/1/postgresql/tmpdb) = cwdstuff::get (0x1A80730, 32768, 1, 0), errno 0
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/postgresql/tmpdb/base/5/2600 = normalize_posix_path (base/5/2600)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/postgresql/tmpdb/base/5/2600)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/postgresql/tmpdb/base/5/2600, dst T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, 0x7FFFF9480) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600), has_acls(1)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000D96E8, dev 000000C3
00:00:08 [main] postgres 58742 fhandler_base::open: (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x50002, supplied_bin 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 fhandler_base::open: 0x0 = NtCreateFile (0x1BF0, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, io, NULL, 0x0, 0x7, 0x1, 0x4020, NULL, 0)
00:00:08 [main] postgres 58742 fhandler_base::open: 1 = fhandler_base::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::open_fs: 1 = fhandler_disk_file::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, 0x50002)
00:00:08 [main] postgres 58742 open: 16 = open(base/5/2600, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::lseek: setting file pointer to 16384
00:00:08 [main] postgres 58742 lseek: 16384 = lseek(16, 0, 2)
00:00:08 [main] postgres 58742 fhandler_disk_file::prw_open: 0x0 = NtOpenFile (0x1BEC, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\2600, io, 0x7, 0x4020)
00:00:08 [main] postgres 58742 fhandler_disk_file::pread: 8192 = pread(0x6FFFF7CEE000, 8192, 0, 0x0)
00:00:08 [main] postgres 58742 pread: 8192 = pread(16, 0x6FFFF7CEE000, 8192, 0)
00:00:08 [main] postgres 58742 open: open(base/5/3379, 0x50002)
00:00:08 [main] postgres 58742 normalize_posix_path: src base/5/3379
00:00:08 [main] postgres 58742 cwdstuff::get: posix /home/1/postgresql/tmpdb
00:00:08 [main] postgres 58742 cwdstuff::get: (/home/1/postgresql/tmpdb) = cwdstuff::get (0x1A80730, 32768, 1, 0), errno 0
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/postgresql/tmpdb/base/5/3379 = normalize_posix_path (base/5/3379)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/postgresql/tmpdb/base/5/3379)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/postgresql/tmpdb/base/5/3379, dst T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, 0x7FFFF8290) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379), has_acls(1)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000D93D8, dev 000000C3
00:00:08 [main] postgres 58742 fhandler_base::open: (\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x50002, supplied_bin 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 fhandler_base::open: 0x0 = NtCreateFile (0x1BF8, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, io, NULL, 0x0, 0x7, 0x1, 0x4020, NULL, 0)
00:00:08 [main] postgres 58742 fhandler_base::open: 1 = fhandler_base::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::open_fs: 1 = fhandler_disk_file::open(\??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, 0x50002)
00:00:08 [main] postgres 58742 open: 17 = open(base/5/3379, 0x50002)
00:00:08 [main] postgres 58742 fhandler_base::lseek: setting file pointer to 8192
00:00:08 [main] postgres 58742 lseek: 8192 = lseek(17, 0, 2)
00:00:08 [main] postgres 58742 fhandler_disk_file::prw_open: 0x0 = NtOpenFile (0x1BF4, 0xC0100000, \??\T:\cygwin64\home\1\postgresql\tmpdb\base\5\3379, io, 0x7, 0x4020)
00:00:08 [main] postgres 58742 fhandler_disk_file::pread: 8192 = pread(0x6FFFF7CF0000, 8192, 0, 0x0)
00:00:08 [main] postgres 58742 pread: 8192 = pread(17, 0x6FFFF7CF0000, 8192, 0)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /usr/local/pgsql/lib/postgres_fdw
00:00:08 [main] postgres 58742 normalize_posix_path: /usr/local/pgsql/lib/postgres_fdw = normalize_posix_path (/usr/local/pgsql/lib/postgres_fdw)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/lib/postgres_fdw)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/lib/postgres_fdw, dst T:\cygwin64\usr\local\pgsql\lib\postgres_fdw, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw, 0x7FFFF9E20) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/lib)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/lib, dst T:\cygwin64\usr\local\pgsql\lib, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\lib, 0x7FFFF9E20) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw,0x7FFFFB060)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /usr/local/pgsql/lib/postgres_fdw.dll
00:00:08 [main] postgres 58742 normalize_posix_path: /usr/local/pgsql/lib/postgres_fdw.dll = normalize_posix_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/lib/postgres_fdw.dll, dst T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFF9E20) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll), has_acls(1)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000D90C8, dev 000000C3
00:00:08 [main] postgres 58742 stat_worker: (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFFB060, 0x8000D90C8), file_attributes 32
00:00:08 [main] postgres 58742 fhandler_base::fstat_helper: 0 = fstat (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFFB060) st_size=827461, st_mode=0100755, st_ino=567735028025453374st_atim=6697F676.2AD560B8 st_ctim=6697F459.28DA34 st_mtim=6697F459.28DA34 st_birthtim=6697F459.28DA34
00:00:08 [main] postgres 58742 stat_worker: 0 = (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll,0x7FFFFB060)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /usr/local/pgsql/lib/postgres_fdw.dll
00:00:08 [main] postgres 58742 normalize_posix_path: /usr/local/pgsql/lib/postgres_fdw.dll = normalize_posix_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/lib/postgres_fdw.dll, dst T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFF9DB0) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll), has_acls(1)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000D90C8, dev 000000C3
00:00:08 [main] postgres 58742 stat_worker: (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFFB080, 0x8000D90C8), file_attributes 32
00:00:08 [main] postgres 58742 fhandler_base::fstat_helper: 0 = fstat (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFFB080) st_size=827461, st_mode=0100755, st_ino=567735028025453374st_atim=6697F676.2AD560B8 st_ctim=6697F459.28DA34 st_mtim=6697F459.28DA34 st_birthtim=6697F459.28DA34
00:00:08 [main] postgres 58742 stat_worker: 0 = (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll,0x7FFFFB080)
00:00:08 [main] postgres 58742 mount_info::conv_to_posix_path: conv_to_posix_path (T:\cygwin64\usr\local\pgsql\bin, 0x0, no-add-slash)
00:00:08 [main] postgres 58742 normalize_win32_path: T:\cygwin64\usr\local\pgsql\bin = normalize_win32_path (T:\cygwin64\usr\local\pgsql\bin)
00:00:08 [main] postgres 58742 mount_info::conv_to_posix_path: /usr/local/pgsql/bin = conv_to_posix_path (T:\cygwin64\usr\local\pgsql\bin)
00:00:08 [main] postgres 58742 normalize_posix_path: src /usr/local/pgsql/bin/postgres_fdw.dll
00:00:08 [main] postgres 58742 normalize_posix_path: /usr/local/pgsql/bin/postgres_fdw.dll = normalize_posix_path (/usr/local/pgsql/bin/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/bin/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/bin/postgres_fdw.dll, dst T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll, 0x7FFFF9B60) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/bin)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/bin, dst T:\cygwin64\usr\local\pgsql\bin, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\bin)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\bin, 0x7FFFF9B60) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\usr\local\pgsql\bin\postgres_fdw.dll), has_acls(1)
00:00:08 [main] postgres 58742 pathfinder::find: not (exists and not dir), skip /usr/local/pgsql/bin/postgres_fdw.dll
00:00:08 [main] postgres 58742 normalize_posix_path: src /usr/local/pgsql/lib/postgres_fdw.dll
00:00:08 [main] postgres 58742 normalize_posix_path: /usr/local/pgsql/lib/postgres_fdw.dll = normalize_posix_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/usr/local/pgsql/lib/postgres_fdw.dll)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /usr/local/pgsql/lib/postgres_fdw.dll, dst T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll, 0x7FFFF9B60) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll), has_acls(1)
00:00:08 [main] postgres 58742 pathfinder::find: (exists and not dir), take /usr/local/pgsql/lib/postgres_fdw.dll
--- Process 3416 (pid: 58742) loaded T:\cygwin64\usr\local\pgsql\lib\postgres_fdw.dll at 00000004a6430000
--- Process 3416 (pid: 58742) loaded T:\cygwin64\usr\local\pgsql\bin\cygpq.dll at 00000004f7970000
00:00:08 [main] postgres 58742 dlopen: ret 0x4A6430000
00:00:08 [main] postgres 58742 dlsym: ret 0x4A6440940
00:00:08 [main] postgres 58742 dlsym: ret 0x4A6439E50
00:00:08 [main] postgres 58742 dlsym: ret 0x4A6440960
00:00:08 [main] postgres 58742 dlsym: ret 0x4A6440950
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F79749B9
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0060000 (wanted 0x1A0060000), h 0x1C04, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0070000 (wanted 0x1A0070000), h 0x1C0C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0080000 (wanted 0x1A0080000), h 0x1C14, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0090000 (wanted 0x1A0090000), h 0x1C1C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A00A0000 (wanted 0x1A00A0000), h 0x1C24, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A00B0000 (wanted 0x1A00B0000), h 0x1C2C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A00C0000 (wanted 0x1A00C0000), h 0x1C34, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A00D0000 (wanted 0x1A00D0000), h 0x1C3C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A00E0000 (wanted 0x1A00E0000), h 0x1C44, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A00F0000 (wanted 0x1A00F0000), h 0x1C4C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0100000 (wanted 0x1A0100000), h 0x1C54, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0110000 (wanted 0x1A0110000), h 0x1C5C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0120000 (wanted 0x1A0120000), h 0x1C64, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0130000 (wanted 0x1A0130000), h 0x1C6C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0140000 (wanted 0x1A0140000), h 0x1C74, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0150000 (wanted 0x1A0150000), h 0x1C7C, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1C80
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1C80
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1C80
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1C78
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1C78
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1C78
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
--- Process 3416 (pid: 58742) thread 6216 created
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A6B8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 114 = write(2, 0xA000026A0, 114)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 120 = write(2, 0xA000026A0, 120)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A6B8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA001340B0 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0160000 (wanted 0x1A0160000), h 0x1C5C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0170000 (wanted 0x1A0170000), h 0x1C54, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0180000 (wanted 0x1A0180000), h 0x1C4C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0190000 (wanted 0x1A0190000), h 0x1C44, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A01A0000 (wanted 0x1A01A0000), h 0x1C3C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A01B0000 (wanted 0x1A01B0000), h 0x1C34, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A01C0000 (wanted 0x1A01C0000), h 0x1C2C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A01D0000 (wanted 0x1A01D0000), h 0x1C24, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A01E0000 (wanted 0x1A01E0000), h 0x1C1C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A01F0000 (wanted 0x1A01F0000), h 0x1C14, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0200000 (wanted 0x1A0200000), h 0x1C0C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0210000 (wanted 0x1A0210000), h 0x1C04, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0220000 (wanted 0x1A0220000), h 0x1C88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0230000 (wanted 0x1A0230000), h 0x1C90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0240000 (wanted 0x1A0240000), h 0x1C98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0250000 (wanted 0x1A0250000), h 0x1CA0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1CA4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1CA4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1CA4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1C9C
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1C9C
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1C9C
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 115 = write(2, 0xA000026A0, 115)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 120 = write(2, 0xA000026A0, 120)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A6B8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [main] postgres 58742 set_bits: me 0xA00133FD0, testing fd 6 (pipe:[8589937592])
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A630, testing fd 4 (pipe:[12884905304])
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00134040, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A630, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA001340B0 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340E0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A370 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [pipesel] postgres 58742 thread_pipe: stopping
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A370 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0260000 (wanted 0x1A0260000), h 0x1CAC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0270000 (wanted 0x1A0270000), h 0x1CB4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0280000 (wanted 0x1A0280000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0290000 (wanted 0x1A0290000), h 0x1CC4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A02A0000 (wanted 0x1A02A0000), h 0x1CCC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A02B0000 (wanted 0x1A02B0000), h 0x1CD4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A02C0000 (wanted 0x1A02C0000), h 0x1CDC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A02D0000 (wanted 0x1A02D0000), h 0x1CE4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A02E0000 (wanted 0x1A02E0000), h 0x1CEC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A02F0000 (wanted 0x1A02F0000), h 0x1CF4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0300000 (wanted 0x1A0300000), h 0x1CFC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0310000 (wanted 0x1A0310000), h 0x1D04, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0320000 (wanted 0x1A0320000), h 0x1D0C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0330000 (wanted 0x1A0330000), h 0x1D14, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0340000 (wanted 0x1A0340000), h 0x1D1C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0350000 (wanted 0x1A0350000), h 0x1D24, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58749, shared 0x1A0360000 (wanted 0x1A0360000), h 0x1D2C, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D30
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D30
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D30
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 115 = write(2, 0xA000026A0, 115)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 120 = write(2, 0xA000026A0, 120)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A378, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [main] postgres 58742 set_bits: me 0xA00134020, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00134090, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 = getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A700 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58750, shared 0x1A0370000 (wanted 0x1A0370000), h 0x1D20, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0380000 (wanted 0x1A0380000), h 0x1C84, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0390000 (wanted 0x1A0390000), h 0x1B98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A03A0000 (wanted 0x1A03A0000), h 0x1D18, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A03B0000 (wanted 0x1A03B0000), h 0x1D10, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A03C0000 (wanted 0x1A03C0000), h 0x1D08, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A03D0000 (wanted 0x1A03D0000), h 0x1D00, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A03E0000 (wanted 0x1A03E0000), h 0x1CF8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A03F0000 (wanted 0x1A03F0000), h 0x1CF0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0400000 (wanted 0x1A0400000), h 0x1CE8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0410000 (wanted 0x1A0410000), h 0x1CE0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0420000 (wanted 0x1A0420000), h 0x1CD8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0430000 (wanted 0x1A0430000), h 0x1CD0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0440000 (wanted 0x1A0440000), h 0x1CC8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0450000 (wanted 0x1A0450000), h 0x1CC0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0460000 (wanted 0x1A0460000), h 0x1CB8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0470000 (wanted 0x1A0470000), h 0x1CB0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1CB4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1CB4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1CB4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1CBC
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1CBC
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1CBC
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 115 = write(2, 0xA000026A0, 115)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 120 = write(2, 0xA000026A0, 120)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A378, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA00134020, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00134090, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 = getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A700 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0480000 (wanted 0x1A0480000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0490000 (wanted 0x1A0490000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A04A0000 (wanted 0x1A04A0000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A04B0000 (wanted 0x1A04B0000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A04C0000 (wanted 0x1A04C0000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58751, shared 0x1A04D0000 (wanted 0x1A04D0000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A04E0000 (wanted 0x1A04E0000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A04F0000 (wanted 0x1A04F0000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0500000 (wanted 0x1A0500000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0510000 (wanted 0x1A0510000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0520000 (wanted 0x1A0520000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0530000 (wanted 0x1A0530000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0540000 (wanted 0x1A0540000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0550000 (wanted 0x1A0550000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0560000 (wanted 0x1A0560000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0570000 (wanted 0x1A0570000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0580000 (wanted 0x1A0580000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DAC
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DAC
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DAC
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 115 = write(2, 0xA000026A0, 115)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A378, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0590000 (wanted 0x1A0590000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A05A0000 (wanted 0x1A05A0000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A05B0000 (wanted 0x1A05B0000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A05C0000 (wanted 0x1A05C0000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A05D0000 (wanted 0x1A05D0000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A05E0000 (wanted 0x1A05E0000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A05F0000 (wanted 0x1A05F0000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0600000 (wanted 0x1A0600000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0610000 (wanted 0x1A0610000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0620000 (wanted 0x1A0620000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0630000 (wanted 0x1A0630000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0640000 (wanted 0x1A0640000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0650000 (wanted 0x1A0650000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0660000 (wanted 0x1A0660000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0670000 (wanted 0x1A0670000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0680000 (wanted 0x1A0680000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D34
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D34
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D34
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7973936
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A378, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00134020, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA00134090, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA00134100 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 2, binary mode
00:00:08 [main] postgres 58742 read: 2 = read(4, 0x7FFFFAC20, 2)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A690, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A700 si->thread 0x7FFC91135610
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA75F, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA75F, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58753, shared 0x1A0690000 (wanted 0x1A0690000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58753, shared 0x1A06A0000 (wanted 0x1A06A0000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A06B0000 (wanted 0x1A06B0000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A06C0000 (wanted 0x1A06C0000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A06D0000 (wanted 0x1A06D0000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A06E0000 (wanted 0x1A06E0000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A06F0000 (wanted 0x1A06F0000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0700000 (wanted 0x1A0700000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0710000 (wanted 0x1A0710000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0720000 (wanted 0x1A0720000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0730000 (wanted 0x1A0730000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0740000 (wanted 0x1A0740000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0750000 (wanted 0x1A0750000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0760000 (wanted 0x1A0760000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0770000 (wanted 0x1A0770000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0780000 (wanted 0x1A0780000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0790000 (wanted 0x1A0790000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A07A0000 (wanted 0x1A07A0000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DAC
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DAC
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DAC
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7979335
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A07B0000 (wanted 0x1A07B0000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A07C0000 (wanted 0x1A07C0000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A07D0000 (wanted 0x1A07D0000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A07E0000 (wanted 0x1A07E0000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A07F0000 (wanted 0x1A07F0000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0800000 (wanted 0x1A0800000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0810000 (wanted 0x1A0810000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0820000 (wanted 0x1A0820000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0830000 (wanted 0x1A0830000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0840000 (wanted 0x1A0840000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58754, shared 0x1A0850000 (wanted 0x1A0850000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0860000 (wanted 0x1A0860000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0870000 (wanted 0x1A0870000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0880000 (wanted 0x1A0880000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0890000 (wanted 0x1A0890000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A08A0000 (wanted 0x1A08A0000), h 0x1D38, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A08B0000 (wanted 0x1A08B0000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F797392E
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A08C0000 (wanted 0x1A08C0000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A08D0000 (wanted 0x1A08D0000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A08E0000 (wanted 0x1A08E0000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A08F0000 (wanted 0x1A08F0000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0900000 (wanted 0x1A0900000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0910000 (wanted 0x1A0910000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0920000 (wanted 0x1A0920000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0930000 (wanted 0x1A0930000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0940000 (wanted 0x1A0940000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0950000 (wanted 0x1A0950000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0960000 (wanted 0x1A0960000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0970000 (wanted 0x1A0970000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0980000 (wanted 0x1A0980000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0990000 (wanted 0x1A0990000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A09A0000 (wanted 0x1A09A0000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A09B0000 (wanted 0x1A09B0000), h 0x1D28, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58755, shared 0x1A09C0000 (wanted 0x1A09C0000), h 0x1DBC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58755, shared 0x1A09D0000 (wanted 0x1A09D0000), h 0x1DBC, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB8
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB8
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB8
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7979335
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A09E0000 (wanted 0x1A09E0000), h 0x1D28, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A09F0000 (wanted 0x1A09F0000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0A00000 (wanted 0x1A0A00000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0A10000 (wanted 0x1A0A10000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0A20000 (wanted 0x1A0A20000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0A30000 (wanted 0x1A0A30000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0A40000 (wanted 0x1A0A40000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0A50000 (wanted 0x1A0A50000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0A60000 (wanted 0x1A0A60000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0A70000 (wanted 0x1A0A70000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0A80000 (wanted 0x1A0A80000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0A90000 (wanted 0x1A0A90000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0AA0000 (wanted 0x1A0AA0000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0AB0000 (wanted 0x1A0AB0000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0AC0000 (wanted 0x1A0AC0000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58756, shared 0x1A0AD0000 (wanted 0x1A0AD0000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0AE0000 (wanted 0x1A0AE0000), h 0x1DBC, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DC0
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DC0
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DC0
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB8
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB8
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB8
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0AF0000 (wanted 0x1A0AF0000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0B00000 (wanted 0x1A0B00000), h 0x1D44, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0B10000 (wanted 0x1A0B10000), h 0x1D4C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0B20000 (wanted 0x1A0B20000), h 0x1D54, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0B30000 (wanted 0x1A0B30000), h 0x1D5C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0B40000 (wanted 0x1A0B40000), h 0x1D64, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0B50000 (wanted 0x1A0B50000), h 0x1D6C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0B60000 (wanted 0x1A0B60000), h 0x1D74, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0B70000 (wanted 0x1A0B70000), h 0x1D7C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0B80000 (wanted 0x1A0B80000), h 0x1D84, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0B90000 (wanted 0x1A0B90000), h 0x1D8C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0BA0000 (wanted 0x1A0BA0000), h 0x1D94, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0BB0000 (wanted 0x1A0BB0000), h 0x1D9C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0BC0000 (wanted 0x1A0BC0000), h 0x1DA4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58757, shared 0x1A0BD0000 (wanted 0x1A0BD0000), h 0x1DAC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0BE0000 (wanted 0x1A0BE0000), h 0x1DB4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0BF0000 (wanted 0x1A0BF0000), h 0x1DC0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DC4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DC4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DC4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0C00000 (wanted 0x1A0C00000), h 0x1DC0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0C10000 (wanted 0x1A0C10000), h 0x1DB4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0C20000 (wanted 0x1A0C20000), h 0x1DAC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0C30000 (wanted 0x1A0C30000), h 0x1DA4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0C40000 (wanted 0x1A0C40000), h 0x1D9C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0C50000 (wanted 0x1A0C50000), h 0x1D94, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0C60000 (wanted 0x1A0C60000), h 0x1D8C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0C70000 (wanted 0x1A0C70000), h 0x1D84, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0C80000 (wanted 0x1A0C80000), h 0x1D7C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0C90000 (wanted 0x1A0C90000), h 0x1D74, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0CA0000 (wanted 0x1A0CA0000), h 0x1D6C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0CB0000 (wanted 0x1A0CB0000), h 0x1D64, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0CC0000 (wanted 0x1A0CC0000), h 0x1D5C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0CD0000 (wanted 0x1A0CD0000), h 0x1D54, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0CE0000 (wanted 0x1A0CE0000), h 0x1D4C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0CF0000 (wanted 0x1A0CF0000), h 0x1D44, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58758, shared 0x1A0D00000 (wanted 0x1A0D00000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DC4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DC4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DC4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DB8
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DB8
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DB8
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0D10000 (wanted 0x1A0D10000), h 0x1CBC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0D20000 (wanted 0x1A0D20000), h 0x1D44, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0D30000 (wanted 0x1A0D30000), h 0x1D4C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0D40000 (wanted 0x1A0D40000), h 0x1D54, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0D50000 (wanted 0x1A0D50000), h 0x1D5C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58759, shared 0x1A0D60000 (wanted 0x1A0D60000), h 0x1D64, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0D70000 (wanted 0x1A0D70000), h 0x1D6C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0D80000 (wanted 0x1A0D80000), h 0x1D74, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0D90000 (wanted 0x1A0D90000), h 0x1D7C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0DA0000 (wanted 0x1A0DA0000), h 0x1D84, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0DB0000 (wanted 0x1A0DB0000), h 0x1D8C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0DC0000 (wanted 0x1A0DC0000), h 0x1D94, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0DD0000 (wanted 0x1A0DD0000), h 0x1D9C, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0DE0000 (wanted 0x1A0DE0000), h 0x1DA4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0DF0000 (wanted 0x1A0DF0000), h 0x1DAC, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0E00000 (wanted 0x1A0E00000), h 0x1DB4, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0E10000 (wanted 0x1A0E10000), h 0x1DC0, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1DC4
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1DC4
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1DC4
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:08 [main] postgres 58742 fhandler_base::read: returning 1, binary mode
00:00:08 [main] postgres 58742 read: 1 = read(4, 0x7FFFFAC20, 1)
00:00:08 [main] postgres 58742 fhandler_socket_local::getsockopt: WinSock SO_ERROR = 0
00:00:08 [main] postgres 58742 cygwin_getsockopt: 0 = getsockopt(18, 65535, 0x1007, 0x7FFFFAAC0, 0x7FFFFAAD0)
00:00:08 [main] postgres 58742 cygwin_getsockname: 0 =getsockname (18, 0xA00134BF8, 0xA00134C78)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 2, m = 4. verifying
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:08 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: 1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 85 = send(18, 0xA0015FFA0, 85, 0x20)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_read: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 0, except_ready: 0
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 0, m = 4. verifying
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [main] postgres 58742 select_stuff::wait: signal received
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:08 [main] postgres 58742 socket_cleanup: returning
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 write: write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:08 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA2BF, 1)
00:00:08 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:08 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:08 [main] postgres 58742 open_shared: name cygpid.58741, shared 0x1A0E20000 (wanted 0x1A0E20000), h 0x1DB0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58760, shared 0x1A0E30000 (wanted 0x1A0E30000), h 0x1DA8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0E40000 (wanted 0x1A0E40000), h 0x1DA0, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58735, shared 0x1A0E50000 (wanted 0x1A0E50000), h 0x1D98, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0E60000 (wanted 0x1A0E60000), h 0x1D90, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0E70000 (wanted 0x1A0E70000), h 0x1D88, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58731, shared 0x1A0E80000 (wanted 0x1A0E80000), h 0x1D80, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0E90000 (wanted 0x1A0E90000), h 0x1D78, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0EA0000 (wanted 0x1A0EA0000), h 0x1D70, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58733, shared 0x1A0EB0000 (wanted 0x1A0EB0000), h 0x1D68, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0EC0000 (wanted 0x1A0EC0000), h 0x1D60, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0ED0000 (wanted 0x1A0ED0000), h 0x1D58, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58734, shared 0x1A0EE0000 (wanted 0x1A0EE0000), h 0x1D50, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0EF0000 (wanted 0x1A0EF0000), h 0x1D48, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0F00000 (wanted 0x1A0F00000), h 0x1D40, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0F10000 (wanted 0x1A0F10000), h 0x1DB8, m 6, created 0
00:00:08 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0F20000 (wanted 0x1A0F20000), h 0x1DC4, m 6, created 0
00:00:08 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1D28
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D28
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D28
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:08 [main] postgres 58742 kill0: kill (58742, 2)
00:00:08 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:08 [main] postgres 58742 sig_send: wakeup 0x1CBC
00:00:08 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1CBC
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1CBC
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:08 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:08 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [main] postgres 58742 select_stuff::wait: returning -3
00:00:08 [main] postgres 58742 select: sel.wait returns -3
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:08 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:08 [main] postgres 58742 pselect: -1 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0), errno 4
00:00:08 [main] postgres 58742 close: close(18)
00:00:08 [main] postgres 58742 close: 0 = close(18)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x7FFFFBF28, 0x0)
00:00:08 [main] postgres 58742 write: 116 = write(2, 0xA000026A0, 116)
00:00:08 [main] postgres 58742 cygwin_send: 110 = send(11, 0xA00085318, 110, 0x0)
00:00:08 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:08 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:08 [main] postgres 58742 cygwin_send: 6 = send(11, 0xA00085318, 6, 0x0)
00:00:08 [main] postgres 58742 cygwin_recv: 46 = recv(11, 0x100CF6D00, 8192, 0x0)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBB38)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer disarmed, Win32 error 0
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFBA28)
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB458)
00:00:08 [main] postgres 58742 stat: entering
00:00:08 [main] postgres 58742 normalize_posix_path: src /home/1/.pgpass
00:00:08 [main] postgres 58742 normalize_posix_path: /home/1/.pgpass = normalize_posix_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1/.pgpass)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1/.pgpass, dst T:\cygwin64\home\1\.pgpass, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass)
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe)
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x4F7975DF2
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.lnk)
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtCreateFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0xC0000034 = NtQueryInformationFile (\??\T:\cygwin64\home\1\.pgpass.exe.lnk)
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1\.pgpass, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/home/1)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /home/1, dst T:\cygwin64\home\1, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\home\1)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\home\1, 0x7FFFF9B80) (mount_flags 0x30008, path_flags 0x0)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\home\1\.pgpass), has_acls(1)
00:00:08 [main] postgres 58742 __set_errno: int stat_worker(path_conv&, stat*):2026 setting errno 2
00:00:08 [main] postgres 58742 stat_worker: -1 = (\??\T:\cygwin64\home\1\.pgpass,0x7FFFFAE10)
00:00:08 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 setitimer: 0 = setitimer()
00:00:08 [itimer] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer armed
00:00:08 [main] postgres 58742 cygwin_socket: socket (1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 build_fh_pc: fh 0x8000DB628, dev 001E0078
00:00:08 [main] postgres 58742 fhandler_base::set_close_on_exec: set close_on_exec for to 1
00:00:08 [main] postgres 58742 fhandler_base::set_flags: flags 0x54002, supplied_bin 0x0
00:00:08 [main] postgres 58742 fhandler_base::set_flags: O_TEXT/O_BINARY set in flags 0x10000
00:00:08 [main] postgres 58742 fhandler_base::set_flags: filemode set to binary
00:00:08 [main] postgres 58742 cygwin_socket: 18 = socket(1, 1 (flags 0x3000000), 0)
00:00:08 [main] postgres 58742 normalize_posix_path: src /tmp/.s.PGSQL.5432
00:00:08 [main] postgres 58742 normalize_posix_path: /tmp/.s.PGSQL.5432 = normalize_posix_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: conv_to_win32_path (/tmp/.s.PGSQL.5432)
00:00:08 [main] postgres 58742 mount_info::conv_to_win32_path: src_path /tmp/.s.PGSQL.5432, dst T:\cygwin64\tmp\.s.PGSQL.5432, flags 0x30008, rc 0
00:00:08 [main] postgres 58742 symlink_info::check: 0x0 = NtCreateFile (\??\T:\cygwin64\tmp\.s.PGSQL.5432)
00:00:08 [main] postgres 58742 symlink_info::check: not a symlink
00:00:08 [main] postgres 58742 symlink_info::check: 0 = symlink.check(T:\cygwin64\tmp\.s.PGSQL.5432, 0x7FFFF9640) (mount_flags 0x30008, path_flags 0x20)
00:00:08 [main] postgres 58742 path_conv::check: this->path(T:\cygwin64\tmp\.s.PGSQL.5432), has_acls(1)
00:00:08 [main] postgres 58742 getpid: 58742 = getpid()
00:00:08 [main] postgres 58742 __set_errno: void __set_winsock_errno(const char*, int):234 setting errno 119
00:00:08 [main] postgres 58742 __set_winsock_errno: connect:981 - winsock error 10036 -> errno 119
00:00:08 [main] postgres 58742 cygwin_connect: -1 = connect(18, 0xA0009A3F8, 110), errno 119
00:00:08 [main] postgres 58742 pthread_sigmask: 0 = pthread_sigmask(0, 0x0, 0x7FFFFB2A8)
00:00:08 [main] postgres 58742 pselect: pselect (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0, 0x0)
00:00:08 [main] postgres 58742 pselect: to NULL, us -1
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[12884905304] fd 4
00:00:08 [main] postgres 58742 dtable::select_read: pipe:[8589937592] fd 6
00:00:08 [main] postgres 58742 dtable::select_write: fd 18
00:00:08 [main] postgres 58742 select: sel.always_ready 0
00:00:08 [main] postgres 58742 start_thread_socket: stuff_start 0x7FFFFAD38
00:00:08 [main] postgres 58742 select_stuff::wait: m 4, us 18446744073709551615, wmfo_timeout -1
00:00:08 [pipesel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [socksel] postgres 58742 SetThreadName: SetThreadDescription() failed. 00000000 10000000
00:00:08 [pipesel] postgres 58742 peek_pipe: read: pipe:[12884905304], ready for read: avail 1
00:00:08 [socksel] postgres 58742 thread_socket: stuff_start 0x7FFFFAD38, timeout 4294967295
00:00:08 [main] postgres 58742 select_stuff::wait: wait_ret 3, m = 4. verifying
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_connect: af_local_connect called, no_getpeereid=0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A370, testing fd 6 (pipe:[8589937592])
00:00:08 [main] postgres 58742 set_bits: ready 0
00:00:08 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:08 [main] postgres 58742 set_bits: ready 1
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_secret: Sending af_local secret succeeded
00:00:08 [main] postgres 58742 select_stuff::wait: res after verify 0
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_secret: Received af_local secret: ACA27CCF-44EA3F93-FB94FBF5-21DDF032
00:00:08 [main] postgres 58742 select_stuff::wait: returning 0
00:00:08 [main] postgres 58742 select: sel.wait returns 0
00:00:08 [socksel] postgres 58742 fhandler_socket_local::af_local_send_cred: Sending eid credentials succeeded
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 timer expired
00:00:08 [itimer] postgres 58742 timer_tracker::thread_func: 0x7FFC911575E0 sending signal 14
00:00:08 [itimer] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 14, its_me 1
00:00:08 [itimer] postgres 58742 sig_send: Not waiting for sigcomplete. its_me 1 signal 14
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14 processing
00:00:08 [itimer] postgres 58742 sig_send: returning 0x0 from sending signal 14
00:00:08 [sig] postgres 58742 init_cygheap::find_tls: sig 14
00:00:08 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:08 [sig] postgres 58742 sigpacket::process: signal 14, signal handler 0x100951460
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE220, stack 0x7FFFFE218, stackptr[-1] 0x1007885B1
00:00:08 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:08 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:08 [sig] postgres 58742 proc_subproc: finished clearing
00:00:08 [sig] postgres 58742 proc_subproc: returning 1
00:00:08 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 14
00:00:08 [sig] postgres 58742 sigpacket::setup_handler: signal 14 delivered
00:00:08 [sig] postgres 58742 sigpacket::process: returning 1
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15 processing
00:00:21 [socksel] postgres 58742 fhandler_socket_local::af_local_recv_cred: Received eid credentials: pid: 58729, uid: 197609, gid: 197121
00:00:21 [sig] postgres 58742 init_cygheap::find_tls: sig 15
00:00:21 [main] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:21 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:21 [main] postgres 58742 set_bits: me 0xA001340F0, testing fd 18 ()
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15, signal handler 0x100951460
00:00:21 [main] postgres 58742 set_bits: ready 1
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: trying to send signal 15 but signal 14 already armed
00:00:21 [main] postgres 58742 peek_pipe: pipe:[12884905304], already ready for read
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: signal 15 not delivered
00:00:21 [main] postgres 58742 set_bits: me 0xA0009A490, testing fd 4 (pipe:[12884905304])
00:00:21 [sig] postgres 58742 sigpacket::process: returning 0
00:00:21 [main] postgres 58742 set_bits: ready 1
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15 processing
00:00:21 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:21 [sig] postgres 58742 init_cygheap::find_tls: sig 15
00:00:21 [main] postgres 58742 socket_cleanup: si 0xA0009A530 si->thread 0x7FFC91135610
00:00:21 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:21 [socksel] postgres 58742 peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
00:00:21 [socksel] postgres 58742 thread_socket: leaving thread_socket
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15, signal handler 0x100951460
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: trying to send signal 15 but signal 14 already armed
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: signal 15 not delivered
00:00:21 [main] postgres 58742 socket_cleanup: returning
00:00:21 [sig] postgres 58742 sigpacket::process: returning 0
00:00:21 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:21 [main] postgres 58742 select_stuff::cleanup: calling cleanup routines
00:00:21 [main] postgres 58742 select_stuff::destroy: deleting select records
00:00:21 [main] postgres 58742 pselect: 2 = select (19, 0x7FFFFAFA0, 0x7FFFFAF90, 0x7FFFFAF80, 0x0)
00:00:21 [main] postgres 58742 set_process_mask_delta: oldmask 0, newmask 2000, deltamask 2000
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 write: write(5, 0x7FFFFA75F, 1)
00:00:21 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_write(PIPEW) release 1
00:00:21 [main] postgres 58742 write: 1 = write(5, 0x7FFFFA75F, 1)
00:00:21 [main] postgres 58742 kill0: kill (-58742, 2)
00:00:21 [main] postgres 58742 kill_pgrp: pid 58742, signal 2
00:00:21 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0F30000 (wanted 0x1A0F30000), h 0x1DC4, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.550, shared 0x1A0F40000 (wanted 0x1A0F40000), h 0x1DB8, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.58746, shared 0x1A0F50000 (wanted 0x1A0F50000), h 0x1D40, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.58717, shared 0x1A0F60000 (wanted 0x1A0F60000), h 0x1D48, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.51466, shared 0x1A0F70000 (wanted 0x1A0F70000), h 0x1D50, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.51465, shared 0x1A0F80000 (wanted 0x1A0F80000), h 0x1D58, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.49550, shared 0x1A0F90000 (wanted 0x1A0F90000), h 0x1D60, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.549, shared 0x1A0FA0000 (wanted 0x1A0FA0000), h 0x1D68, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.58729, shared 0x1A0FB0000 (wanted 0x1A0FB0000), h 0x1D70, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.39223, shared 0x1A0FC0000 (wanted 0x1A0FC0000), h 0x1D78, m 6, created 0
00:00:21 [main] postgres 58742 open_shared: name cygpid.58730, shared 0x1A0FD0000 (wanted 0x1A0FD0000), h 0x1D80, m 6, created 0
00:00:21 [main] postgres 58742 kill_pgrp: killing pid 58742, pgrp 58742, p->no ctty, no ctty
00:00:21 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal -66, its_me 1
00:00:21 [main] postgres 58742 sig_send: wakeup 0x1D94
00:00:21 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D94
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15 processing
00:00:21 [sig] postgres 58742 init_cygheap::find_tls: sig 15
00:00:21 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:21 [sig] postgres 58742 sigpacket::process: signal 15, signal handler 0x100951460
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:21 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:21 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:21 [sig] postgres 58742 proc_subproc: finished clearing
00:00:21 [sig] postgres 58742 proc_subproc: returning 1
00:00:21 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 15
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: signal 15 delivered
00:00:21 [sig] postgres 58742 sigpacket::process: returning 1
00:00:21 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D94
00:00:21 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 6000, deltamask 4000
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 set_signal_mask: setmask 6000, newmask 2000, mask_bits 4000
00:00:21 [main] postgres 58742 sig_send: returning 0x0 from sending signal -66
00:00:21 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:21 [main] postgres 58742 sig_send: wakeup 0x1D94
00:00:21 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D94
00:00:21 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:21 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:21 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:21 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x1009067EF
00:00:21 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:21 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:21 [sig] postgres 58742 proc_subproc: finished clearing
00:00:21 [sig] postgres 58742 proc_subproc: returning 1
00:00:21 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:21 [sig] postgres 58742 sigpacket::process: returning 1
00:00:21 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D94
00:00:21 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:21 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:21 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:21 [main] postgres 58742 kill_pgrp: 0 = kill(58742, 2)
00:00:21 [main] postgres 58742 kill0: kill (58742, 2)
00:00:21 [main] postgres 58742 sig_send: sendsig 0x134, pid 58742, signal 2, its_me 1
00:00:21 [main] postgres 58742 sig_send: wakeup 0x1D8C
00:00:21 [main] postgres 58742 sig_send: Waiting for pack.wakeup 0x1D8C
00:00:21 [sig] postgres 58742 sigpacket::process: signal 2 processing
00:00:21 [sig] postgres 58742 init_cygheap::find_tls: sig 2
00:00:21 [sig] postgres 58742 sigpacket::process: using tls 0x7FFFFCE00
00:00:21 [sig] postgres 58742 sigpacket::process: signal 2, signal handler 0x100951460
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: controlled interrupt. stackptr 0x7FFFFE228, stack 0x7FFFFE218, stackptr[-1] 0x100919339
00:00:21 [sig] postgres 58742 proc_subproc: args: 4, 1
00:00:21 [sig] postgres 58742 proc_subproc: clear waiting threads
00:00:21 [sig] postgres 58742 proc_subproc: finished clearing
00:00:21 [sig] postgres 58742 proc_subproc: returning 1
00:00:21 [sig] postgres 58742 _cygtls::interrupt_setup: armed signal_arrived 0x178, signal 2
00:00:21 [sig] postgres 58742 sigpacket::setup_handler: signal 2 delivered
00:00:21 [sig] postgres 58742 sigpacket::process: returning 1
00:00:21 [sig] postgres 58742 wait_sig: signalling pack.wakeup 0x1D8C
00:00:21 [main] postgres 58742 set_process_mask_delta: oldmask 2000, newmask 2002, deltamask 2
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 set_signal_mask: setmask 2002, newmask 2000, mask_bits 2
00:00:21 [main] postgres 58742 sig_send: returning 0x0 from sending signal 2
00:00:21 [main] postgres 58742 _pinfo::kill: 0 = _pinfo::kill (2), pid 58742, process_state 0x441
00:00:21 [main] postgres 58742 set_signal_mask: setmask 2000, newmask 0, mask_bits 2000
00:00:21 [main] postgres 58742 read: read(4, 0x7FFFFAC20, 1024) nonblocking
00:00:21 [main] postgres 58742 fhandler_pipe::release_select_sem: raw_read(PIPER) release 1
00:00:21 [main] postgres 58742 fhandler_base::read: returning 2, binary mode
00:00:21 [main] postgres 58742 read: 2 = read(4, 0x7FFFFAC20, 2)
00:00:21 [main] postgres 58742 write: 239 = write(2, 0xA000026A0, 239)
00:00:21 [main] postgres 58742 cygwin_send: 117 = send(11, 0xA00085318, 117, 0x0)
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 write: 124 = write(2, 0xA000026A0, 124)
00:00:21 [main] postgres 58742 set_signal_mask: setmask 0, newmask 0, mask_bits 0
00:00:21 [main] postgres 58742 sigprocmask: 0 = sigprocmask (0, 0x100CF8D60, 0x0)
00:00:21 [main] postgres 58742 munmap: munmap (addr 0x6FFFF6FF0000, len 1048576)
00:00:21 [main] postgres 58742 build_fh_pc: fh 0x8000D90C8, dev 000000C3
00:00:21 [main] postgres 58742 munmap: 0 = munmap(): 0x6FFFF6FF0000
00:00:21 [main] postgres 58742 write: 119 = write(2, 0xA000026A0, 119)
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 write: 101 = write(2, 0xA000026A0, 101)
00:00:21 [main] postgres 58742 write: 144 = write(2, 0xA000026A0, 144)
00:00:21 [main] postgres 58742 write: 75 = write(2, 0xA000026A0, 75)
00:00:21 [main] postgres 58742 write: 121 = write(2, 0xA000026A0, 121)
00:00:21 [main] postgres 58742 write: 117 = write(2, 0xA000026A0, 117)
00:00:21 [main] postgres 58742 write: 102 = write(2, 0xA000026A0, 102)
00:00:21 [main] postgres 58742 time: 1721235085 = time(0x0)
00:00:21 [main] postgres 58742 fstat: 0 = fstat(3, 0x7FFFFAE90)
00:00:21 [main] postgres 58742 close: close(3)
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/dev/urandom' handle 0x16B8
00:00:21 [main] postgres 58742 close: 0 = close(3)
00:00:21 [main] postgres 58742 do_exit: do_exit (256), exit_state 1
00:00:21 [main] postgres 58742 void: 0x0 = signal (20, 0x1)
00:00:21 [main] postgres 58742 void: 0x100951460 = signal (1, 0x1)
00:00:21 [main] postgres 58742 void: 0x100951460 = signal (2, 0x1)
00:00:21 [main] postgres 58742 void: 0x100951460 = signal (3, 0x1)
00:00:21 [main] postgres 58742 fhandler_base::close_with_arch: line 1276: /dev/pty1<0x800008A90> usecount + -1 = 0
00:00:21 [main] postgres 58742 fhandler_base::close_with_arch: closing archetype
00:00:21 [main] postgres 58742 fhandler_pty_slave::cleanup: /dev/pty1 closed, usecount 0
00:00:21 [main] postgres 58742 fhandler_pty_slave::close: closing last open /dev/pty1 handle
00:00:21 [main] postgres 58742 fhandler_pty_common::close: pty1 <0x4C8,0x4D0> closing
00:00:21 [main] postgres 58742 open_shared: name cygpid.39224, shared 0x1A0FE0000 (wanted 0x1A0FE0000), h 0x490, m 6, created 0
00:00:21 [main] postgres 58742 dtable::delete_archetype: deleting element 0 for /dev/pty1(136/1)
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postmaster.log' handle 0x158
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postmaster.log' handle 0x238
00:00:21 [main] postgres 58742 fhandler_pipe::release_select_sem: close(PIPER) release 1
00:00:21 [main] postgres 58742 fhandler_base::close: closing 'pipe:[12884905304]' handle 0x164
00:00:21 [main] postgres 58742 fhandler_pipe::release_select_sem: close(PIPEW) release 0
00:00:21 [main] postgres 58742 fhandler_base::close: closing 'pipe:[12884905304]' handle 0x160
00:00:21 [main] postgres 58742 fhandler_pipe::release_select_sem: close(PIPER) release 2
00:00:21 [main] postgres 58742 fhandler_base::close: closing 'pipe:[8589937592]' handle 0x28E0
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/global/2676' handle 0x1990
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/global/2677' handle 0x1998
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/global/2671' handle 0x19A0
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/2601' handle 0x19A8
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/2691' handle 0x19D8
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/1255' handle 0x19E0
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/2690' handle 0x19E8
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/2650' handle 0x1BE8
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/2600' handle 0x1BF0
00:00:21 [main] postgres 58742 fhandler_base::close: closing '/home/1/postgresql/tmpdb/base/5/3379' handle 0x1BF8
00:00:21 [main] postgres 58742 getpid: 58742 = getpid()
00:00:21 [main] postgres 58742 proc_terminate: child_procs count 0
00:00:21 [main] postgres 58742 proc_terminate: leaving
00:00:21 [main] postgres 58742 pinfo::exit: Calling dlls.cleanup_forkables n 0x100, exitcode 0x100
00:00:21 [main] postgres 58742 pinfo::exit: Calling ExitProcess n 0x100, exitcode 0x100
--- Process 3416 (pid: 58742) thread 4796 exited with status 0x100
--- Process 3416 (pid: 58742) thread 5616 exited with status 0x100
--- Process 3416 (pid: 58742) thread 872 exited with status 0x100
--- Process 3416 (pid: 58742) thread 3316 exited with status 0x100
--- Process 3416 (pid: 58742) thread 6484 exited with status 0x100
--- Process 3416 (pid: 58742) thread 5820 exited with status 0x100
--- Process 3416 (pid: 58742) thread 4900 exited with status 0x100
--- Process 3416 (pid: 58742) thread 6216 exited with status 0x100
--- Process 3416 (pid: 58742) exited with status 0x100
On Thu, Jul 18, 2024 at 7:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:
As far as I can see (having analyzed a number of runs), the hanging occurs
when some itimer-related activity happens before "peek_socket" in this
event sequence:
[main] postgres {pid} select_stuff::wait: res after verify 0
[main] postgres {pid} select_stuff::wait: returning 0
[main] postgres {pid} select: sel.wait returns 0
[main] postgres {pid} peek_socket: read_ready: 0, write_ready: 1, except_ready: 0
(See the last occurrence of the sequence in the log.)
Yeah, right, there's a lot going on between those two lines from the
[main] thread. There are messages from helper threads [itimer], [sig]
and [socksel]. At a guess, [socksel] might be doing extra secret
communication over our socket in order to exchange SO_PEERCRED
information, huh, is that always there? Seems worth filing a bug
report.
For the record, I know of one other occasional test failure on Cygwin:
it randomly panics in SnapBuildSerialize(). While I don't expect
there to be any users of PostgreSQL on Cygwin (it was unusably broken
before we refactored the postmaster in v16), that one is interesting
because (1) it also happens on native Windows builds, and (2) at least
one candidate fix [1] sounds like it would speed up logical replication
on all operating systems.
[1]: /messages/by-id/CA+hUKG+J4jSFk=-hdoZdcx+p7ru6xuipzCZY-kiKoDc2FjsV7g@mail.gmail.com
On 2024-Jul-16, Alvaro Herrera wrote:
On 2024-Jul-16, Alvaro Herrera wrote:
Maybe we can disable this test specifically on Cygwin. We could do that
by creating a postgres_fdw_cancel.sql file, with the current output for
all platforms, and a "SELECT version() ~ 'cygwin' AS skip_test" query,
as we do for encoding tests and such.

Something like this.
I have pushed this "fix", so we shouldn't see this failure anymore.
--
Álvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/
Hello Alvaro,
Let me show you another related anomaly, which drongo kindly discovered
recently: [1]. That test failed with:
SELECT dblink_cancel_query('dtest1');
- dblink_cancel_query
----------------------
- OK
+ dblink_cancel_query
+--------------------------
+ cancel request timed out
(1 row)
I've managed to reproduce this when running 20 dblink tests in parallel,
and with extra logging added (see attached) I've got:
...
2024-08-28 10:17:12.949 PDT [8236:204] pg_regress/dblink LOG: statement: SELECT dblink_cancel_query('dtest1');
!!!PQcancelPoll|8236| conn->status: 2
!!!PQcancelPoll|8236| conn->status: 3
!!!PQconnectPoll|8236| before pqPacketSend(..., &cancelpacket, ...)
!!!pqPacketSend|8236| before pqFlush
!!!pqsecure_raw_write|8236| could not send data to server: Socket is not connected (0x00002749/10057)
!!!pqPacketSend|8236| after pqFlush, STATUS_OK
!!!PQconnectPoll|8236| after pqPacketSend, STATUS_OK
2024-08-28 10:17:12.950 PDT [5548:7] pg_regress LOG: statement: select * from foo where f1 < 3
2024-08-28 10:17:12.951 PDT [8692:157] DEBUG: forked new backend, pid=4644 socket=5160
2024-08-28 10:17:12.973 PDT [4644:1] [unknown] LOG: connection received: host=::1 port=55073
2024-08-28 10:17:12.973 PDT [4644:2] [unknown] LOG: !!!BackendInitialize| before ProcessSSLStartup()
!!!PQcancelPoll|8236| conn->status: 4
!!!PQcancelPoll|8236| conn->status: 4
2024-08-28 10:17:24.060 PDT [1436:1] DEBUG: snapshot of 0+0 running transaction ids (lsn 0/194C4E0 oldest xid 780
latest complete 779 next xid 780)
!!!PQcancelPoll|8236| conn->status: 4
2024-08-28 10:17:42.951 PDT [4644:3] [unknown] LOG: !!!BackendInitialize| ProcessSSLStartup() returned -1
2024-08-28 10:17:42.951 PDT [4644:4] [unknown] DEBUG: shmem_exit(0): 0 before_shmem_exit callbacks to make
...
Thus, pqsecure_raw_write(), called via PQcancelPoll() -> PQconnectPoll() ->
pqPacketSend() -> pqFlush() -> pqSendSome() -> pqsecure_write(), returned
the WSAENOTCONN error, but it wasn't noticed at upper levels.
Consequently, the cancelling backend waited for the cancel packet that was
never sent.
The first commit on which I could reproduce this test failure is 2466d6654.
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-26%2021%3A35%3A04
Best regards,
Alexander
Attachments:
libpqsrv_cancel-debugging.patchtext/x-patch; charset=UTF-8; name=libpqsrv_cancel-debugging.patchDownload
diff --git a/src/backend/tcop/backend_startup.c b/src/backend/tcop/backend_startup.c
index cfa2755196..fc9abbfe53 100644
--- a/src/backend/tcop/backend_startup.c
+++ b/src/backend/tcop/backend_startup.c
@@ -252,8 +252,10 @@ BackendInitialize(ClientSocket *client_sock, CAC_state cac)
RegisterTimeout(STARTUP_PACKET_TIMEOUT, StartupPacketTimeoutHandler);
enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);
+elog(LOG, "!!!BackendInitialize| before ProcessSSLStartup()");
/* Handle direct SSL handshake */
status = ProcessSSLStartup(port);
+elog(LOG, "!!!BackendInitialize| ProcessSSLStartup() returned %d", status);
/*
* Receive the startup packet (which might turn out to be a cancel request
diff --git a/src/interfaces/libpq/fe-cancel.c b/src/interfaces/libpq/fe-cancel.c
index 213a6f43c2..8482dfa3e8 100644
--- a/src/interfaces/libpq/fe-cancel.c
+++ b/src/interfaces/libpq/fe-cancel.c
@@ -209,6 +209,7 @@ PQcancelPoll(PGcancelConn *cancelConn)
PGconn *conn = &cancelConn->conn;
int n;
+fprintf(stderr, "!!!PQcancelPoll|%d| conn->status: %d\n", getpid(), conn->status);
/*
* We leave most of the connection establishment to PQconnectPoll, since
* it's very similar to normal connection establishment. But once we get
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 360d9a4547..72a31d29ed 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -3407,12 +3407,14 @@ keep_going: /* We will come back to here until there is
cancelpacket.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);
cancelpacket.backendPID = pg_hton32(conn->be_pid);
cancelpacket.cancelAuthCode = pg_hton32(conn->be_key);
+fprintf(stderr, "!!!PQconnectPoll|%d| before pqPacketSend(..., &cancelpacket, ...)\n", getpid());
if (pqPacketSend(conn, 0, &cancelpacket, packetlen) != STATUS_OK)
{
libpq_append_conn_error(conn, "could not send cancel packet: %s",
SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)));
goto error_return;
}
+fprintf(stderr, "!!!PQconnectPoll|%d| after pqPacketSend, STATUS_OK\n", getpid());
conn->status = CONNECTION_AWAITING_RESPONSE;
return PGRES_POLLING_READING;
}
@@ -5012,9 +5014,13 @@ pqPacketSend(PGconn *conn, char pack_type,
if (pqPutMsgEnd(conn))
return STATUS_ERROR;
+if (buf_len == 12)
+fprintf(stderr, "!!!pqPacketSend|%d| before pqFlush\n", getpid());
/* Flush to ensure backend gets it. */
if (pqFlush(conn))
return STATUS_ERROR;
+if (buf_len == 12)
+fprintf(stderr, "!!!pqPacketSend|%d| after pqFlush, STATUS_OK\n", getpid());
return STATUS_OK;
}
diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index f628082337..1a85d9a40f 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -421,6 +421,7 @@ retry_masked:
sebuf, sizeof(sebuf)));
/* keep newline out of translated string */
strlcat(msgbuf, "\n", sizeof(msgbuf));
+fprintf(stderr, "!!!pqsecure_raw_write|%d| %s", getpid(), msgbuf);
conn->write_err_msg = strdup(msgbuf);
/* Now claim the write succeeded */
n = len;
Alexander Lakhin <exclusion@gmail.com> writes:
Let me show you another related anomaly, which drongo kindly discovered
recently: [1]. That test failed with:
 SELECT dblink_cancel_query('dtest1');
- dblink_cancel_query
-----------------------
- OK
+   dblink_cancel_query
+--------------------------
+ cancel request timed out
(1 row)
While we're piling on, has anyone noticed that *non* Windows buildfarm
animals are also failing this test pretty frequently? The most recent
occurrence is at [1], and it looks like this:
diff -U3 /home/bf/bf-build/mylodon/HEAD/pgsql/contrib/postgres_fdw/expected/query_cancel.out /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/postgres_fdw/regress/results/query_cancel.out
--- /home/bf/bf-build/mylodon/HEAD/pgsql/contrib/postgres_fdw/expected/query_cancel.out 2024-07-22 11:09:50.638133878 +0000
+++ /home/bf/bf-build/mylodon/HEAD/pgsql.build/testrun/postgres_fdw/regress/results/query_cancel.out 2024-08-30 06:28:01.971083945 +0000
@@ -17,4 +17,5 @@
SET LOCAL statement_timeout = '10ms';
select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5; -- this takes very long
ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
COMMIT;
I trawled the buildfarm database for other occurrences of "could not
get result of cancel request" since this test went in. I found 34
of them (see attachment), and none that weren't the timeout flavor.
Most of the failing machines are not especially slow, so even though
the hard-wired 30 second timeout that's being used here feels a little
under-engineered, I'm not sure that arranging to raise it would help.
My spidey sense feels that there's some actual bug here, but it's hard
to say where. mylodon's postmaster log confirms that the 30 seconds
did elapse, and that there wasn't anything much else going on:
2024-08-30 06:27:31.926 UTC client backend[3668381] pg_regress/query_cancel ERROR: canceling statement due to statement timeout
2024-08-30 06:27:31.926 UTC client backend[3668381] pg_regress/query_cancel STATEMENT: select count(*) from ft1 CROSS JOIN ft2 CROSS JOIN ft4 CROSS JOIN ft5;
2024-08-30 06:28:01.946 UTC client backend[3668381] pg_regress/query_cancel WARNING: could not get result of cancel request due to timeout
Any thoughts?
regards, tom lane
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2006%3A25%3A46
On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:
While we're piling on, has anyone noticed that *non* Windows buildfarm
animals are also failing this test pretty frequently?
<snip>
Any thoughts?
Yes. Fixes are here (see the ~10 emails above in the thread for details):
/messages/by-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com
They don't apply anymore after the change to move this test to a dedicated
file. It shouldn't be too hard to update those patches though. I'll try to
do that in a few weeks when I'm back behind my computer. But feel free to
commit something earlier.
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:
While we're piling on, has anyone noticed that *non* Windows buildfarm
animals are also failing this test pretty frequently?
Yes. Fixes are here (see the ~10 emails above in the thread for details):
/messages/by-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com
Hmm. I'm not convinced that 0001 is an actual *fix*, but it should
at least reduce the frequency of occurrence a lot, which'd help.
I don't want to move the test case to where you propose, because
that's basically not sensible. But can't we avoid remote estimates
by just cross-joining ft1 to itself, and not using the tables for
which remote estimate is enabled?
I think 0002 is probably outright wrong, or at least the change to
disable_statement_timeout is. Once we get to that, we don't want
to throw a timeout error any more, even if an interrupt was received
just before it.
regards, tom lane
I wrote:
Hmm. I'm not convinced that 0001 is an actual *fix*, but it should
at least reduce the frequency of occurrence a lot, which'd help.
After enabling log_statement = all to verify what commands are being
sent to the remote, I realized that there's a third thing this patch
can do to stabilize matters: issue a regular remote query inside the
test transaction, before we enable the timeout. This will ensure
that we've dealt with configure_remote_session() and started a
remote transaction, so that there aren't extra round trips happening
for that while the clock is running.
Pushed with that addition and some comment-tweaking. We'll see
whether that actually makes things more stable, but I don't think
it could make it worse.
regards, tom lane
On Fri, 30 Aug 2024 at 22:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Fri, Aug 30, 2024, 21:21 Tom Lane <tgl@sss.pgh.pa.us> wrote:
While we're piling on, has anyone noticed that *non* Windows buildfarm
animals are also failing this test pretty frequently?

Yes. Fixes are here (see the ~10 emails above in the thread for details):
/messages/by-id/CAGECzQQO8Cn2Rw45xUYmvzXeSSsst7-bcruuzUfMbGQc3ueSdw@mail.gmail.com

Hmm. I'm not convinced that 0001 is an actual *fix*, but it should
at least reduce the frequency of occurrence a lot, which'd help.
I also don't think it's an actual fix, but I couldn't think of a way
to fix this. And since this only happens if you cancel right at the
start of a postgres_fdw query, I don't think it's worth investing too
much time on a fix.
I don't want to move the test case to where you propose, because
that's basically not sensible. But can't we avoid remote estimates
by just cross-joining ft1 to itself, and not using the tables for
which remote estimate is enabled?
Yeah that should work too (I just saw your next email, where you said
it's committed like this).
I think 0002 is probably outright wrong, or at least the change to
disable_statement_timeout is. Once we get to that, we don't want
to throw a timeout error any more, even if an interrupt was received
just before it.
The disable_statement_timeout change was not the part of that patch
that was necessary for stable output, only the change in the first
branch of enable_statement_timeout was necessary. The reason being
that enable_statement_timeout is called multiple times for a query,
because start_xact_command is called multiple times in
exec_simple_query. The change to disable_statement_timeout just seemed
like the logical extension of that change, especially since there was
basically a verbatim copy of disable_statement_timeout in the second
branch of enable_statement_timeout.
To make sure I understand your suggestion correctly: Are you saying
you would want to completely remove the outstanding interrupt if it
was caused by the statement_timeout when disable_statement_timeout is
called? Because I agree that would probably make sense, but that
sounds like a more impactful change. But the current behaviour seems
strictly worse than the behaviour proposed in the patch to me, because
currently the backend would still be interrupted, but the error would
indicate a reason for the interrupt that is simply incorrect i.e. it
will say it was cancelled due to a user request, which never happened.
Hello Tom,
30.08.2024 23:55, Tom Lane wrote:
Pushed with that addition and some comment-tweaking. We'll see
whether that actually makes things more stable, but I don't think
it could make it worse.
Thank you for fixing that issue!
I've tested your fix with the modification I proposed upthread:
idle_session_timeout_enabled = false;
}
+if (rand() % 10 == 0) pg_usleep(10000);
/*
* (5) disable async signal conditions again.
and can confirm that the issue is gone. On 8749d850f~1, the test failed
on iterations 3, 3, 12 for me, but on current REL_17_STABLE, 100 test
iterations succeeded.
At the same time, mylodon confirmed my other finding at [1] and failed [2] with:
-ERROR: canceling statement due to statement timeout
+ERROR: canceling statement due to user request
[1]: /messages/by-id/4db099c8-4a52-3cc4-e970-14539a319466@gmail.com
[2]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2023%3A03%3A31
Best regards,
Alexander
On Sat, 31 Aug 2024 at 06:04, Alexander Lakhin <exclusion@gmail.com> wrote:
At the same time, mylodon confirmed my other finding at [1] and failed [2] with:
-ERROR: canceling statement due to statement timeout
+ERROR: canceling statement due to user request
[1]: /messages/by-id/4db099c8-4a52-3cc4-e970-14539a319466@gmail.com
[2]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2024-08-30%2023%3A03%3A31
Interestingly that's a different test that failed, but it looks like
it failed for the same reason that my 0002 patch fixes.
I also took a quick look at the code again, and completely removing
the outstanding interrupt seems hard to do. Because there's no way to
know if there were multiple causes for the interrupt, i.e. someone
could have pressed ctrl+c as well and we wouldn't want to undo that.
So I think the solution in 0002, while debatable if strictly correct,
is the only fix that we can easily do. Also I personally believe the
behaviour resulting from 0002 is totally correct: The new behaviour
would be that if a timeout occurred, right before it was disabled or
reset, but the interrupt was not processed yet, then we process that
timeout as normal. That seems totally reasonable behaviour to me from
the perspective of an end user: You get a timeout error when the
timeout occurred before the timeout was disabled/reset.
Hello Tom and Jelte,
31.08.2024 07:04, Alexander Lakhin wrote:
I've tested your fix with the modification I proposed upthread:
idle_session_timeout_enabled = false;
}
+if (rand() % 10 == 0) pg_usleep(10000);
/*
 * (5) disable async signal conditions again.
and can confirm that the issue is gone. On 8749d850f~1, the test failed
on iterations 3, 3, 12 for me, but on current REL_17_STABLE, 100 test
iterations succeeded.

One month later, treehopper has found a way to break that test: [1].
The failure log contains:
2024-09-30 19:34:31.347 UTC [3201034:13] fdw_retry_check LOG: execute <unnamed>: DECLARE c2 CURSOR FOR
SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 1" r4 ON (TRUE))
INNER JOIN "S 1"."T 1" r6 ON (TRUE))
2024-09-30 19:34:31.464 UTC [3201033:10] pg_regress/query_cancel ERROR: canceling statement due to statement timeout
2024-09-30 19:34:31.464 UTC [3201033:11] pg_regress/query_cancel STATEMENT: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b
CROSS JOIN ft1 c CROSS JOIN ft1 d;
2024-09-30 19:34:31.466 UTC [3201035:1] [unknown] LOG: connection received: host=[local]
2024-09-30 19:34:31.474 UTC [3201034:14] fdw_retry_check LOG: statement: FETCH 100 FROM c2
2024-09-30 19:35:01.485 UTC [3201033:12] pg_regress/query_cancel WARNING: could not get result of cancel request due to
timeout
It looks like this time the cancel request arrived at the remote backend
when it processed FETCH, presumably at the DoingCommandRead stage.
I've managed to reproduce the issue with the additional modification:
@@ -1605,7 +1605,10 @@ postgresIterateForeignScan(ForeignScanState *node)
* first call after Begin or ReScan.
*/
if (!fsstate->cursor_exists)
+{
create_cursor(node);
+pg_usleep(100000);
+}
With postgres_fdw/Makefile modified to repeat the query_cancel test, I get:
ok 13 - query_cancel 245 ms
not ok 14 - query_cancel 30258 ms
ok 15 - query_cancel 249 ms
...
ok 19 - query_cancel 236 ms
not ok 20 - query_cancel 30258 ms
ok 21 - query_cancel 225 ms
..
not ok 33 - query_cancel 30272 ms
1..33
# 3 of 33 tests failed.
(Please find attached the complete patch.)
[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=treehopper&dt=2024-09-30%2019%3A21%3A14
Best regards,
Alexander
Attachments:
reproduce-postgres_fdw-query_cancel-failure.pathchtext/plain; charset=UTF-8; name=reproduce-postgres_fdw-query_cancel-failure.pathchDownload
diff --git a/contrib/postgres_fdw/Makefile b/contrib/postgres_fdw/Makefile
index b9fa699305..3e1260e125 100644
--- a/contrib/postgres_fdw/Makefile
+++ b/contrib/postgres_fdw/Makefile
@@ -16,7 +16,7 @@ SHLIB_LINK_INTERNAL = $(libpq)
EXTENSION = postgres_fdw
DATA = postgres_fdw--1.0.sql postgres_fdw--1.0--1.1.sql
-REGRESS = postgres_fdw query_cancel
+REGRESS = postgres_fdw query_cancel $(shell printf 'query_cancel %.0s' `seq 31`)
ifdef USE_PGXS
PG_CONFIG = pg_config
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index fc65d81e21..9190fbb3cb 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -1605,7 +1605,10 @@ postgresIterateForeignScan(ForeignScanState *node)
* first call after Begin or ReScan.
*/
if (!fsstate->cursor_exists)
+{
create_cursor(node);
+pg_usleep(100000);
+}
/*
* Get some more tuples, if we've run out.
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a750dc800b..c281547e22 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -4717,6 +4717,10 @@ PostgresMain(const char *dbname, const char *username)
idle_session_timeout_enabled = false;
}
+if (rand() % 10 == 0)
+{
+pg_usleep(10000);
+}
/*
* (5) disable async signal conditions again.
*
Hi,
On 2024-10-01 15:00:00 +0300, Alexander Lakhin wrote:
Hello Tom and Jelte,
31.08.2024 07:04, Alexander Lakhin wrote:
I've tested your fix with the modification I proposed upthread:
            idle_session_timeout_enabled = false;
        }
+if (rand() % 10 == 0) pg_usleep(10000);
        /*
         * (5) disable async signal conditions again.
and can confirm that the issue is gone. On 8749d850f~1, the test failed
on iterations 3, 3, 12 for me, but on current REL_17_STABLE, 100 test
iterations succeeded.

One month later, treehopper has found a way to break that test: [1].
The failure log contains:
2024-09-30 19:34:31.347 UTC [3201034:13] fdw_retry_check LOG: execute <unnamed>: DECLARE c2 CURSOR FOR
    SELECT count(*) FROM ((("S 1"."T 1" r1 INNER JOIN "S 1"."T 1" r2 ON (TRUE)) INNER JOIN "S 1"."T 1" r4 ON (TRUE)) INNER JOIN "S 1"."T 1" r6 ON (TRUE))
2024-09-30 19:34:31.464 UTC [3201033:10] pg_regress/query_cancel ERROR: canceling statement due to statement timeout
2024-09-30 19:34:31.464 UTC [3201033:11] pg_regress/query_cancel STATEMENT: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d;
2024-09-30 19:34:31.466 UTC [3201035:1] [unknown] LOG: connection received: host=[local]
2024-09-30 19:34:31.474 UTC [3201034:14] fdw_retry_check LOG: statement: FETCH 100 FROM c2
2024-09-30 19:35:01.485 UTC [3201033:12] pg_regress/query_cancel WARNING: could not get result of cancel request due to timeout
Another failure in CI that cleared up after a retry:
https://cirrus-ci.com/task/5725647677423616
https://api.cirrus-ci.com/v1/artifact/task/5725647677423616/log/contrib/postgres_fdw/regression.diffs
https://api.cirrus-ci.com/v1/artifact/task/5725647677423616/log/contrib/postgres_fdw/log/postmaster.log
diff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/query_cancel.out /tmp/cirrus-ci-build/contrib/postgres_fdw/results/query_cancel.out
--- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/query_cancel.out 2024-11-16 22:13:32.174593005 +0000
+++ /tmp/cirrus-ci-build/contrib/postgres_fdw/results/query_cancel.out 2024-11-16 22:21:20.165877954 +0000
@@ -29,4 +29,5 @@
-- This would take very long if not canceled:
SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d;
ERROR: canceling statement due to statement timeout
+WARNING: could not get result of cancel request due to timeout
COMMIT;
Statement logging isn't enabled for the test, so the log isn't that helpful:
2024-11-16 22:20:49.962 UTC [38643][not initialized] [[unknown]][:0] LOG: connection received: host=[local]
2024-11-16 22:20:49.964 UTC [38643][client backend] [[unknown]][67/1:0] LOG: connection authenticated: user="postgres" method=trust (/tmp/cirrus-ci-build/contrib/postgres_fdw/tmp_check/data/pg_hba.conf:117)
2024-11-16 22:20:49.964 UTC [38643][client backend] [[unknown]][67/1:0] LOG: connection authorized: user=postgres database=contrib_regression application_name=pg_regress/query_cancel
2024-11-16 22:20:50.007 UTC [38645][not initialized] [[unknown]][:0] LOG: connection received: host=[local]
2024-11-16 22:20:50.010 UTC [38645][client backend] [[unknown]][68/1:0] LOG: connection authenticated: user="postgres" method=trust (/tmp/cirrus-ci-build/contrib/postgres_fdw/tmp_check/data/pg_hba.conf:117)
2024-11-16 22:20:50.010 UTC [38645][client backend] [[unknown]][68/1:0] LOG: connection authorized: user=postgres database=contrib_regression application_name=fdw_retry_check
2024-11-16 22:20:50.148 UTC [38643][client backend] [pg_regress/query_cancel][67/4:0] ERROR: canceling statement due to statement timeout
2024-11-16 22:20:50.148 UTC [38643][client backend] [pg_regress/query_cancel][67/4:0] STATEMENT: SELECT count(*) FROM ft1 a CROSS JOIN ft1 b CROSS JOIN ft1 c CROSS JOIN ft1 d;
2024-11-16 22:20:50.159 UTC [38656][not initialized] [[unknown]][:0] LOG: connection received: host=[local]
2024-11-16 22:21:20.167 UTC [38643][client backend] [pg_regress/query_cancel][67/0:0] WARNING: could not get result of cancel request due to timeout
2024-11-16 22:21:20.170 UTC [38643][client backend] [pg_regress/query_cancel][:0] LOG: disconnection: session time: 0:00:30.211 user=postgres database=contrib_regression host=[local]
2024-11-16 22:21:20.315 UTC [36800][postmaster] LOG: received fast shutdown request
Greetings,
Andres Freund
Andres Freund <andres@anarazel.de> writes:
On 2024-10-01 15:00:00 +0300, Alexander Lakhin wrote:
One month later, treehopper has found a way to break that test: [1].
The failure log contains:
2024-09-30 19:35:01.485 UTC [3201033:12] pg_regress/query_cancel WARNING:
could not get result of cancel request due to timeout
Another failure in CI, that cleared up after a retry:
+WARNING: could not get result of cancel request due to timeout
Yeah. This has been happening off-and-on in the buildfarm ever
since we added that test. I'm not sure if it's just "the test
is unstable" or if it's telling us there's a problem with the
cancel logic. Scraping the last 3 months worth of buildfarm
logs finds these instances:
sysname | branch | snapshot | stage | l
------------+---------------+---------------------+----------------------------+------------------------------------------------------------------
adder | HEAD | 2024-08-29 10:42:09 | postgres_fdwInstallCheck-C | +WARNING: could not get result of cancel request due to timeout
adder | REL_17_STABLE | 2024-08-29 12:52:00 | postgres_fdwCheck | +WARNING: could not get result of cancel request due to timeout
froghopper | HEAD | 2024-10-25 08:31:55 | ContribCheck-C | +WARNING: could not get result of cancel request due to timeout
grassquit | HEAD | 2024-08-20 19:29:20 | postgres_fdwCheck | +WARNING: could not get result of cancel request due to timeout
mylodon | HEAD | 2024-08-30 06:25:46 | postgres_fdwCheck | +WARNING: could not get result of cancel request due to timeout
pipit | HEAD | 2024-11-13 01:12:28 | ContribCheck-C | +WARNING: could not get result of cancel request due to timeout
snakefly | REL_17_STABLE | 2024-08-19 11:30:04 | ContribCheck-C | +WARNING: could not get result of cancel request due to timeout
treehopper | REL_17_STABLE | 2024-09-30 19:21:14 | ContribCheck-C | +WARNING: could not get result of cancel request due to timeout
(8 rows)
regards, tom lane
Hello Tom and Andres,
17.11.2024 05:33, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
Another failure in CI, that cleared up after a retry:
+WARNING: could not get result of cancel request due to timeout
Yeah. This has been happening off-and-on in the buildfarm ever
since we added that test. I'm not sure if it's just "the test
is unstable" or if it's telling us there's a problem with the
cancel logic. Scraping the last 3 months worth of buildfarm
logs finds these instances:
Yes, I counted those bf failures at [1] too and posted my explanation
upthread [2].
[1]: https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#posgtres_fdw.2Fquery_cancel_fails_due_to_an_unexpected_warning_on_canceling_a_statement
[2]: /messages/by-id/c68225b4-fce9-3425-1534-a21a815d5846@gmail.com
Best regards,
Alexander
Alexander Lakhin <exclusion@gmail.com> writes:
17.11.2024 05:33, Tom Lane wrote:
Yeah. This has been happening off-and-on in the buildfarm ever
since we added that test. I'm not sure if it's just "the test
is unstable" or if it's telling us there's a problem with the
cancel logic. Scraping the last 3 months worth of buildfarm
logs finds these instances:
Yes, I counted those bf failures at [1] too and posted my explanation
upthread [2].
Sorry, I'd forgotten about that. I added some more debug logging
to the modifications you made, and confirmed your theory that the
remote session is ignoring the cancel request because it receives it
while DoingCommandRead is true; which must mean that it hasn't started
the slow query yet.
This implies that the 100ms delay in query_cancel.sql is not reliably
enough for the remote to receive the command, which surprises me,
especially since the failing animals aren't particularly slow ones.
Maybe there is something else happening? But I do reproduce the
failure after adding your delays, and the patch I'm about to propose
does fix it.
Anyway, given that info, Jelte's unapplied 0002 patch earlier in the
thread is not the answer, because this is about dropping a query
cancel not about losing a timeout interrupt. The equivalent thing
to what he suggested would be to not clear the cancel request flag
during DoingCommandRead, instead letting it kill the next query.
But I didn't like the idea for timeouts, and I like it even less for
query cancel. What I think we should do instead is to re-issue
the cancel request if we've waited a little and nothing came of it.
This corresponds more or less to what a human user would likely do
(or at least this human would). The attached patch is set up to
re-cancel after 1 second, then 2 more seconds, then 4 more, etc
until we reach the 30-second "it's dead Jim" threshold.
This seems to fix the problem here. Thoughts?
BTW, while I didn't do it in the attached, I'm tempted to greatly
reduce the 100ms delay in query_cancel.sql. If this does make it
more robust, we shouldn't need that much time anymore.
regards, tom lane
Attachments:
reissue-cancel-requests.patch (text/x-diff; charset=us-ascii)
diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 2326f391d3..7a8cac83cb 100644
--- a/contrib/postgres_fdw/connection.c
+++ b/contrib/postgres_fdw/connection.c
@@ -95,6 +95,13 @@ static uint32 pgfdw_we_get_result = 0;
*/
#define CONNECTION_CLEANUP_TIMEOUT 30000
+/*
+ * Milliseconds to wait before issuing another cancel request. This covers
+ * the race condition where the remote session ignored our cancel request
+ * because it arrived while idle.
+ */
+#define RE_CANCEL_TIMEOUT 1000
+
/* Macro for constructing abort command to be sent */
#define CONSTRUCT_ABORT_COMMAND(sql, entry, toplevel) \
do { \
@@ -145,6 +152,7 @@ static void pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_cancel_query(PGconn *conn);
static bool pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime);
static bool pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
+ TimestampTz recanceltime,
bool consume_input);
static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
bool ignore_errors);
@@ -154,6 +162,7 @@ static bool pgfdw_exec_cleanup_query_end(PGconn *conn, const char *query,
bool consume_input,
bool ignore_errors);
static bool pgfdw_get_cleanup_result(PGconn *conn, TimestampTz endtime,
+ TimestampTz recanceltime,
PGresult **result, bool *timed_out);
static void pgfdw_abort_cleanup(ConnCacheEntry *entry, bool toplevel);
static bool pgfdw_abort_cleanup_begin(ConnCacheEntry *entry, bool toplevel,
@@ -1322,18 +1331,25 @@ pgfdw_reset_xact_state(ConnCacheEntry *entry, bool toplevel)
static bool
pgfdw_cancel_query(PGconn *conn)
{
+ TimestampTz now = GetCurrentTimestamp();
TimestampTz endtime;
+ TimestampTz recanceltime;
/*
* If it takes too long to cancel the query and discard the result, assume
* the connection is dead.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
- CONNECTION_CLEANUP_TIMEOUT);
+ endtime = TimestampTzPlusMilliseconds(now, CONNECTION_CLEANUP_TIMEOUT);
+
+ /*
+ * Also, lose patience and re-issue the cancel request after a little bit.
+ * (This serves to close some race conditions.)
+ */
+ recanceltime = TimestampTzPlusMilliseconds(now, RE_CANCEL_TIMEOUT);
if (!pgfdw_cancel_query_begin(conn, endtime))
return false;
- return pgfdw_cancel_query_end(conn, endtime, false);
+ return pgfdw_cancel_query_end(conn, endtime, recanceltime, false);
}
/*
@@ -1359,9 +1375,10 @@ pgfdw_cancel_query_begin(PGconn *conn, TimestampTz endtime)
}
static bool
-pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime, bool consume_input)
+pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime,
+ TimestampTz recanceltime, bool consume_input)
{
- PGresult *result = NULL;
+ PGresult *result;
bool timed_out;
/*
@@ -1380,7 +1397,8 @@ pgfdw_cancel_query_end(PGconn *conn, TimestampTz endtime, bool consume_input)
}
/* Get and discard the result of the query. */
- if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
+ if (pgfdw_get_cleanup_result(conn, endtime, recanceltime,
+ &result, &timed_out))
{
if (timed_out)
ereport(WARNING,
@@ -1453,7 +1471,7 @@ pgfdw_exec_cleanup_query_end(PGconn *conn, const char *query,
TimestampTz endtime, bool consume_input,
bool ignore_errors)
{
- PGresult *result = NULL;
+ PGresult *result;
bool timed_out;
Assert(query != NULL);
@@ -1471,7 +1489,7 @@ pgfdw_exec_cleanup_query_end(PGconn *conn, const char *query,
}
/* Get the result of the query. */
- if (pgfdw_get_cleanup_result(conn, endtime, &result, &timed_out))
+ if (pgfdw_get_cleanup_result(conn, endtime, endtime, &result, &timed_out))
{
if (timed_out)
ereport(WARNING,
@@ -1495,28 +1513,36 @@ pgfdw_exec_cleanup_query_end(PGconn *conn, const char *query,
}
/*
- * Get, during abort cleanup, the result of a query that is in progress. This
- * might be a query that is being interrupted by transaction abort, or it might
- * be a query that was initiated as part of transaction abort to get the remote
- * side back to the appropriate state.
+ * Get, during abort cleanup, the result of a query that is in progress.
+ * This might be a query that is being interrupted by a cancel request or by
+ * transaction abort, or it might be a query that was initiated as part of
+ * transaction abort to get the remote side back to the appropriate state.
*
* endtime is the time at which we should give up and assume the remote
- * side is dead. Returns true if the timeout expired or connection trouble
- * occurred, false otherwise. Sets *result except in case of a timeout.
- * Sets timed_out to true only when the timeout expired.
+ * side is dead. recanceltime is the time at which we should issue a fresh
+ * cancel request (pass the same value as endtime if this is not wanted).
+ *
+ * Returns true if the timeout expired or connection trouble occurred,
+ * false otherwise. Sets *result except in case of a true result.
+ * Sets *timed_out to true only when the timeout expired.
*/
static bool
-pgfdw_get_cleanup_result(PGconn *conn, TimestampTz endtime, PGresult **result,
+pgfdw_get_cleanup_result(PGconn *conn, TimestampTz endtime,
+ TimestampTz recanceltime,
+ PGresult **result,
bool *timed_out)
{
volatile bool failed = false;
PGresult *volatile last_res = NULL;
+ *result = NULL;
*timed_out = false;
/* In what follows, do not leak any PGresults on an error. */
PG_TRY();
{
+ int canceldelta = RE_CANCEL_TIMEOUT * 2;
+
for (;;)
{
PGresult *res;
@@ -1527,8 +1553,33 @@ pgfdw_get_cleanup_result(PGconn *conn, TimestampTz endtime, PGresult **result,
TimestampTz now = GetCurrentTimestamp();
long cur_timeout;
+ /* If timeout has expired, give up. */
+ if (now >= endtime)
+ {
+ *timed_out = true;
+ failed = true;
+ goto exit;
+ }
+
+ /* If we need to re-issue the cancel request, do that. */
+ if (now >= recanceltime)
+ {
+ /* We ignore failure to issue the repeated request. */
+ (void) libpqsrv_cancel(conn, endtime);
+
+ /* Recompute "now" in case that took measurable time. */
+ now = GetCurrentTimestamp();
+
+ /* Adjust re-cancel timeout in increasing steps. */
+ recanceltime = TimestampTzPlusMilliseconds(now,
+ canceldelta);
+ canceldelta += canceldelta;
+ }
+
/* If timeout has expired, give up, else get sleep time. */
- cur_timeout = TimestampDifferenceMilliseconds(now, endtime);
+ cur_timeout = TimestampDifferenceMilliseconds(now,
+ Min(endtime,
+ recanceltime));
if (cur_timeout <= 0)
{
*timed_out = true;
@@ -1849,7 +1900,9 @@ pgfdw_finish_abort_cleanup(List *pending_entries, List *cancel_requested,
foreach(lc, cancel_requested)
{
ConnCacheEntry *entry = (ConnCacheEntry *) lfirst(lc);
+ TimestampTz now = GetCurrentTimestamp();
TimestampTz endtime;
+ TimestampTz recanceltime;
char sql[100];
Assert(entry->changing_xact_state);
@@ -1863,10 +1916,13 @@ pgfdw_finish_abort_cleanup(List *pending_entries, List *cancel_requested,
* remaining entries in the list, leading to slamming that entry's
* connection shut.
*/
- endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(),
+ endtime = TimestampTzPlusMilliseconds(now,
CONNECTION_CLEANUP_TIMEOUT);
+ recanceltime = TimestampTzPlusMilliseconds(now,
+ RE_CANCEL_TIMEOUT);
- if (!pgfdw_cancel_query_end(entry->conn, endtime, true))
+ if (!pgfdw_cancel_query_end(entry->conn, endtime,
+ recanceltime, true))
{
/* Unable to cancel running query */
pgfdw_reset_xact_state(entry, toplevel);
On Thu, 21 Nov 2024 at 02:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Anyway, given that info, Jelte's unapplied 0002 patch earlier in the
thread is not the answer, because this is about dropping a query
cancel not about losing a timeout interrupt.
Agreed that 0002 does not fix the issue re-reported by Andres (let's
call this issue number 1). But I'm still of the opinion that 0002
fixes a real bug: i.e. a bug which causes timeouts.spec to randomly
fail [1] (let's call this issue number 2).
This seems to fix the problem here. Thoughts?
Overall, a good approach to fix issue number 1. I think it would be
best if this was integrated into libpqsrv_cancel instead though. That
way the dblink would benefit from it too.
nit: Maybe call it RETRY_CANCEL_TIME. The RE_ prefix wasn't instantly
obvious what it meant to me, it seemed like an abbreviation when I first saw it.
BTW, while I didn't do it in the attached, I'm tempted to greatly
reduce the 100ms delay in query_cancel.sql. If this does make it
more robust, we shouldn't need that much time anymore.
Seems sensible to me.
Finally there's a third issue [2] (let's call this issue number 3).
Alexander did some investigation into this issue too [3]. For this one
I have a hard time understanding what is going on, or at least why
this issue only seems to apply to cancel connections. From his
description of the problem and my reading of the code it seems that if
we fail to send the StartupPacket/CancelRequest due to a socket error,
we set the write_failed flag. But we don't actually check this flag
during the CONNECTION_AWAITING_RESPONSE phase of PQconnectPoll, so we
just wait until we reach a timeout because the server never sends us
anything.
[1]: https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#timeouts.spec_failed_because_of_statement_cancelled_due_to_unexpected_reason
[2]: https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#dblink.sql_.28and_postgres_fdw.sql.29_fail_on_Windows_due_to_the_cancel_packet_not_sent
[3]: /messages/by-id/5ea25e4d-1ee2-b9bf-7806-119ffa658826@gmail.com
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Thu, 21 Nov 2024 at 02:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:
This seems to fix the problem here. Thoughts?
Overall, a good approach to fix issue number 1. I think it would be
best if this was integrated into libpqsrv_cancel instead though. That
way the dblink would benefit from it too.
How would we do that? libpqsrv_cancel is not chartered to wait around
for the results of the cancel, and I'm not even sure that it could
know what to check for.
(I did get the impression that all this code was not very well
factored, but I'm not volunteering to rewrite it wholesale.)
nit: Maybe call it RETRY_CANCEL_TIME.
Sure.
regards, tom lane
On Fri, 22 Nov 2024 at 01:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
How would we do that? libpqsrv_cancel is not chartered to wait around
for the results of the cancel, and I'm not even sure that it could
know what to check for.
Ah yeah, you're right. I got confused by the two timeouts (the one to
wait for the response of the cancel request itself, and the one to
wait for the running query to actually be cancelled).
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
On Fri, 22 Nov 2024 at 01:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
How would we do that? libpqsrv_cancel is not chartered to wait around
for the results of the cancel, and I'm not even sure that it could
know what to check for.
Ah yeah, you're right. I got confused by the two timeouts (the one to
wait for the response of the cancel request itself, and the one to
wait for the running query to actually be cancelled).
Not having heard any better ideas, I pushed that to HEAD and v17
(with the renaming you suggested).
regards, tom lane