Re: shared tempfile was not removed on statement_timeout
On Fri, Dec 13, 2019 at 03:03:47PM +1300, Thomas Munro wrote:
On Fri, Dec 13, 2019 at 7:05 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
I have a nagios check on ancient tempfiles, intended to catch debris left by
crashed processes. But it triggered on this file:
$ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls
142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 12 11:32 /var/lib/pgsql/12/data/base/pgsql_tmp
169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset
169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0
169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0
I found:
2019-12-07 01:35:56 | 11025 | postgres | canceling statement due to statement timeout | CLUSTER pg_stat_database_snap USI
2019-12-07 01:35:56 | 11025 | postgres | temporary file: path "base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/2.0", size 5455872 | CLUSTER pg_stat_database_snap USI
Hmm. I played around with this and couldn't reproduce it, but I
thought of something. What if the statement timeout is reached while
we're in here:
#0 PathNameDeleteTemporaryDir (dirname=0x7fffffffd010 "base/pgsql_tmp/pgsql_tmp28884.31.sharedfileset") at fd.c:1471
#1 0x0000000000a32c77 in SharedFileSetDeleteAll (fileset=0x80182e2cc) at sharedfileset.c:177
#2 0x0000000000a327e1 in SharedFileSetOnDetach (segment=0x80a6e62d8, datum=34385093324) at sharedfileset.c:206
#3 0x0000000000a365ca in dsm_detach (seg=0x80a6e62d8) at dsm.c:684
#4 0x000000000061621b in DestroyParallelContext (pcxt=0x80a708f20) at parallel.c:904
#5 0x00000000005d97b3 in _bt_end_parallel (btleader=0x80fe9b4b0) at nbtsort.c:1473
#6 0x00000000005d92f0 in btbuild (heap=0x80a7bc4c8, index=0x80a850a50, indexInfo=0x80fec1ab0) at nbtsort.c:340
...
The CHECK_FOR_INTERRUPTS() inside the walkdir() loop could ereport()
out of there after deleting some but not all of your files, but the
code in dsm_detach() has already popped the callback (which it does
"to avoid infinite error recursion"), so it won't run again on error
cleanup. Hmm. But then... maybe the two log lines you quoted should
be the other way around for that.
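To make that hazard concrete, here is a minimal standalone illustration (not PostgreSQL source): the cleanup callback is popped from its list before it is invoked, as dsm_detach() does, and a longjmp() stands in for the ereport(ERROR) that CHECK_FOR_INTERRUPTS() can throw inside walkdir().
/*
 * Standalone sketch of the hazard: the callback is popped before it runs,
 * so an interrupt partway through means error cleanup never re-runs it.
 */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf errenv;              /* error-recovery point */
static int     files_remaining = 3; /* files under the .sharedfileset dir */
static int     callback_registered = 1;

static void
delete_all_files(void)
{
    while (files_remaining > 0)
    {
        /* CHECK_FOR_INTERRUPTS(): pretend statement_timeout fires here */
        if (files_remaining == 1)
            longjmp(errenv, 1);
        files_remaining--;
        printf("deleted one file, %d left\n", files_remaining);
    }
}

int
main(void)
{
    if (setjmp(errenv) == 0)
    {
        /* dsm_detach(): pop the callback before invoking it */
        callback_registered = 0;
        delete_all_files();
    }
    else if (callback_registered)
    {
        /* error cleanup path: the callback is already gone, never reached */
        delete_all_files();
    }
    printf("files left behind: %d\n", files_remaining);
    return 0;
}
Run it and it reports one file left behind; the error path cannot re-run the cleanup because the callback was already popped.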
With inspiration from re-reading your messages several times in rapid succession,
I was able to reproduce this easily with:
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3344,6 +3344,7 @@ walkdir(const char *path,
struct stat fst;
int sret;
+ usleep(99999);
CHECK_FOR_INTERRUPTS();
On Fri, Dec 13, 2019 at 03:49:26PM +1300, Thomas Munro wrote:
Ok, so it looks like we shouldn't be relying on the same code path for
'happy' and 'error' cleanup. This could probably be fixed with a well
placed explicit call to SharedFileSetDeleteAll() or a new function
SharedFileSetDestroy(), and perhaps a flag in shmem to say it's been
done so the callback doesn't do it again needlessly. I don't think
this problem is specific to parallel index creation.
Find below a caveman-approved patch which avoids leaving behind tmpfiles.
I'm not sure how to do this cleanly, since:
| src/include/utils/tuplesort.h: * Tuplesortstate and Sharedsort are opaque types whose details are not
Maybe we need to add a new parameter like:
| tuplesort_end(Tuplesortstate *state, bool do_delete_fileset)
Arguably, that has the benefit that existing callers *have* to confront whether
they should delete the fileset or not. This is such a minor issue that it's
unfortunate to force a confrontation over it.
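For illustration only, _bt_spooldestroy() under that hypothetical two-argument signature might read ('am_leader' is a made-up placeholder):
static void
_bt_spooldestroy(BTSpool *btspool)
{
    /* only the leader asks tuplesort_end() to delete the shared fileset */
    tuplesort_end(btspool->sortstate, am_leader);
    pfree(btspool);
}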
--
Justin
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index efee86784b..f5b0a48d64 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -511,12 +511,17 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
return reltuples;
}
+extern void *tuplesort_shared_fileset(Tuplesortstate*);
+
/*
* clean up a spool structure and its substructures.
*/
static void
_bt_spooldestroy(BTSpool *btspool)
{
+ void *fileset = tuplesort_shared_fileset(btspool->sortstate);
+ if (fileset)
+ SharedFileSetDeleteAll(fileset);
tuplesort_end(btspool->sortstate);
pfree(btspool);
}
@@ -1669,6 +1674,10 @@ _bt_end_parallel(BTLeader *btleader)
/* Free last reference to MVCC snapshot, if one was used */
if (IsMVCCSnapshot(btleader->snapshot))
UnregisterSnapshot(btleader->snapshot);
+
+ // SharedFileSetDeleteAll(btleader->sharedsort->fileset);
+ // SharedFileSetDeleteAll(btleader->sharedsort2->fileset);
+
DestroyParallelContext(btleader->pcxt);
ExitParallelMode();
}
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 5f6420efb2..f89d42f475 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3344,6 +3344,7 @@ walkdir(const char *path,
struct stat fst;
int sret;
+ usleep(99999);
CHECK_FOR_INTERRUPTS();
if (strcmp(de->d_name, ".") == 0 ||
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 3c49476483..3de9592b78 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -1375,6 +1375,11 @@ tuplesort_free(Tuplesortstate *state)
MemoryContextReset(state->sortcontext);
}
+void *tuplesort_shared_fileset(Tuplesortstate *state)
+{
+ return state->shared ? &state->shared->fileset : NULL;
+}
+
/*
* tuplesort_end
*
On Tue, Jul 21, 2020 at 4:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
/*
 * clean up a spool structure and its substructures.
 */
static void
_bt_spooldestroy(BTSpool *btspool)
{
+ void *fileset = tuplesort_shared_fileset(btspool->sortstate);
+ if (fileset)
+ SharedFileSetDeleteAll(fileset);
tuplesort_end(btspool->sortstate);
pfree(btspool);
}
Why can't tuplesort_end do it?
On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:
On Tue, Jul 21, 2020 at 4:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
/*
 * clean up a spool structure and its substructures.
 */
static void
_bt_spooldestroy(BTSpool *btspool)
{
+ void *fileset = tuplesort_shared_fileset(btspool->sortstate);
+ if (fileset)
+ SharedFileSetDeleteAll(fileset);
tuplesort_end(btspool->sortstate);
pfree(btspool);
}
Why can't tuplesort_end do it?
Because then I think the parallel workers remove their own files, with tests
failing like:
+ERROR: could not open temporary file "0.0" from BufFile "0": No such file or directory
I looked around a bit more and came up with this, which works, but I don't know
enough to say if it's right.
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 5f6420efb2..f89d42f475 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3344,6 +3344,7 @@ walkdir(const char *path,
struct stat fst;
int sret;
+ usleep(99999);
CHECK_FOR_INTERRUPTS();
if (strcmp(de->d_name, ".") == 0 ||
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 3c49476483..c6e5e6d00b 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -1387,6 +1387,9 @@ tuplesort_free(Tuplesortstate *state)
void
tuplesort_end(Tuplesortstate *state)
{
+ if (state->shared && state->shared->workersFinished == state->nParticipants)
+ SharedFileSetDeleteAll(&state->shared->fileset);
+
tuplesort_free(state);
/*
On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:
On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:
Why can't tuplesort_end do it?
Because then I think the parallel workers remove their own files, with tests
failing like:
+ERROR: could not open temporary file "0.0" from BufFile "0": No such file or directory
I looked around a bit more and came up with this, which works, but I don't know
enough to say if it's right.
I convinced myself this is right, since state->nParticipants==-1 for workers.
Only the leader should do the cleanup.
Added here:
https://commitfest.postgresql.org/29/2657/
--
Justin
Attachments:
v1-0001-explicity-remove-shared-fileset.patch (text/x-diff)
From 3d9db664d86f9a1f4745c19d1fb620dae5f9e42a Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Mon, 27 Jul 2020 05:50:34 -0500
Subject: [PATCH v1] explicity remove shared fileset..
..to avoid leaving it around if a timeout occurs *during* the path that would
normally release resources.
Discussion: https://www.postgresql.org/message-id/20191212180506.GR2082%40telsasoft.com
Analysis: Thomas Munro
---
src/backend/utils/sort/tuplesort.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 3c49476483..b2b3ff789c 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -1387,6 +1387,15 @@ tuplesort_free(Tuplesortstate *state)
void
tuplesort_end(Tuplesortstate *state)
{
+ /*
+ * Leader explicitly cleans up shared filesets.
+ * This would normally be cleaned up by DestroyParallelContext(), but
+ * doing it now avoids leaving files behind if an error/timeout happens
+ * *during* cleanup.
+ */
+ if (state->shared && state->shared->workersFinished == state->nParticipants)
+ SharedFileSetDeleteAll(&state->shared->fileset);
+
tuplesort_free(state);
/*
--
2.17.0
On Wed, 29 Jul 2020 at 10:37, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:
On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:
Why can't tuplesort_end do it?
Because then I think the parallel workers remove their own files, with tests
failing like:
+ERROR: could not open temporary file "0.0" from BufFile "0": No such file or directory
I looked around a bit more and came up with this, which works, but I don't know
enough to say if it's right.
I convinced myself this is right, since state->nParticipants==-1 for workers.
Only the leader should do the cleanup.
Added here:
https://commitfest.postgresql.org/29/2657/
I've also investigated this issue. As Thomas mentioned before, this
problem is not specific to parallel index creation. Shared temporary
files could be left behind if the process is interrupted while deleting
them as part of detaching the dsm segment.
To fix this issue, possible solutions would be:
1. Like the current patch, we call SharedFileSetDeleteAll() before
DestroyParallelContext() which calls dsm_detach() so that we can make
sure to delete these files while not relying on on_dsm_detach
callback. That way, even if the process is interrupted during that
cleaning, it will clean these files again during transaction abort
(AtEOXact_Parallel() calls dsm_detach()). OTOH a downside would be
that we will end up setting a rule that we need to explicitly call
SharedFileSetDeleteAll().
2. We don't use on_dsm_detach callback to delete the shared file set.
Instead, I wonder if we can delete them at the end of the transaction
by using ResourceOwner mechanism, like we do for non-shared temporary
files cleanup. This idea doesn't have the cons that idea #1 has. OTOH,
the lifetime of the shared file set will change from the parallel
context to the transaction, leading to keep many temporary files until
the transaction end. Also, we would need to rework the handling of the shared
file set.
3. We use on_dsm_detach as well as on_proc_exit callback to delete the
shared file set. It doesn't resolve the root cause but that way, even
if the process didn’t delete it on destroying the parallel context, we
can make sure to delete it on process exit.
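As a rough illustration of #3 (SharedFileSetOnProcExit() below is made up; on_dsm_detach(), on_proc_exit() and SharedFileSetOnDetach() are the existing hooks):
/* Hypothetical proc-exit twin of the existing dsm callback */
static void
SharedFileSetOnProcExit(int code, Datum arg)
{
    /* best effort: repeat the cleanup the dsm callback should have done */
    SharedFileSetDeleteAll((SharedFileSet *) DatumGetPointer(arg));
}

/* ... registered next to the existing callback when the fileset is set up */
on_dsm_detach(seg, SharedFileSetOnDetach, PointerGetDatum(fileset));
on_proc_exit(SharedFileSetOnProcExit, PointerGetDatum(fileset));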
I think #1 is suitable for back branches. For HEAD, I think #2 and #3
would be better in terms of not setting an implicit rule. Thoughts?
Regards,
--
Masahiko Sawada http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
At Wed, 28 Oct 2020 19:03:14 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in
On Wed, 29 Jul 2020 at 10:37, Justin Pryzby <pryzby@telsasoft.com> wrote:
On Mon, Jul 27, 2020 at 05:39:02AM -0500, Justin Pryzby wrote:
On Mon, Jul 27, 2020 at 08:00:46PM +1200, Thomas Munro wrote:
Why can't tuplesort_end do it?
Because then I think the parallel workers remove their own files, with tests
failing like:
+ERROR: could not open temporary file "0.0" from BufFile "0": No such file or directory
I looked around a bit more and came up with this, which works, but I don't know
enough to say if it's right.
I convinced myself this is right, since state->nParticipants==-1 for workers.
Only the leader should do the cleanup.
Added here:
https://commitfest.postgresql.org/29/2657/
+ if (state->shared && state->shared->workersFinished == state->nParticipants)
Isn't it more straightforward to check "state->shared &&
state->worker == -1"?
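i.e. roughly (a sketch of the alternative condition only, using the same fields the patch already touches):
+ if (state->shared && state->worker == -1)
+ SharedFileSetDeleteAll(&state->shared->fileset);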
I've also investigated this issue. As Thomas mentioned before, this
problem is not specific to parallel index creation. Shared temporary
files could be left behind if the process is interrupted while deleting
them as part of detaching the dsm segment.
To fix this issue, possible solutions would be:
1. Like the current patch, we call SharedFileSetDeleteAll() before
DestroyParallelContext() which calls dsm_detach() so that we can make
sure to delete these files while not relying on on_dsm_detach
callback. That way, even if the process is interrupted during that
cleaning, it will clean these files again during transaction abort
(AtEOXact_Parallel() calls dsm_detach()). OTOH a downside would be
that we will end up setting a rule that we need to explicitly call
SharedFileSetDeleteAll().
That seems to be common. We release locks explicitly, but they are
automatically released on error. Of course it is slightly different in
that SharedFileSetOnDetach unconditionally removes the directory, but
that doesn't matter as long as that behavior doesn't lead to an
error. We can skip the cleanup, as Thomas suggested, if it is not necessary.
Looking at the comment of SharedFileSetOnDetach:
| * everything in them. We can't raise an error on failures, because this runs
| * in error cleanup paths.
I feel that a function that shouldn't error-out also shouldn't be
cancellable. If that's the case, we could omit the CHECK_FOR_INTERRUPTS() in
walkdir() when elevel is smaller than ERROR.
=====
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index b58502837a..593c23553e 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3374,7 +3374,9 @@ walkdir(const char *path,
{
char subpath[MAXPGPATH * 2];
- CHECK_FOR_INTERRUPTS();
+ /* omit interrupts while we shouldn't error-out */
+ if (elevel >= ERROR)
+ CHECK_FOR_INTERRUPTS();
if (strcmp(de->d_name, ".") == 0 ||
strcmp(de->d_name, "..") == 0)
=====
2. We don't use on_dsm_detach callback to delete the shared file set.
Instead, I wonder if we can delete them at the end of the transaction
by using ResourceOwner mechanism, like we do for non-shared temporary
files cleanup. This idea doesn't have the cons that idea #1 has. OTOH,
the lifetime of the shared file set will change from the parallel
context to the transaction, leading to keep many temporary files until
the transaction end. Also, we would need to rework the handling of the shared
file set.
3. We use on_dsm_detach as well as on_proc_exit callback to delete the
shared file set. It doesn't resolve the root cause but that way, even
if the process didn’t delete it on destroying the parallel context, we
can make sure to delete it on process exit.
I think #1 is suitable for back branches. For HEAD, I think #2 and #3
would be better in terms of not setting an implicit rule. Thoughts?
As long as we allow dsm_detach to be canceled, the problem persists
whatever else we do. So #2 and #3 seem a bit too much. It seems to me
that #1 + omitting CHECK_FOR_INTERRUPTS() is suitable for all
branches.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
On 26/01/2021 06:46, Kyotaro Horiguchi wrote:
Looking at the comment of SharedFileSetOnDetach:
| * everything in them. We can't raise an error on failures, because this runs
| * in error cleanup paths.
I feel that a function that shouldn't error-out also shouldn't be
cancellable. If that's the case, we could omit the CHECK_FOR_INTERRUPTS() in
walkdir() when elevel is smaller than ERROR.
=====
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index b58502837a..593c23553e 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3374,7 +3374,9 @@ walkdir(const char *path,
{
char subpath[MAXPGPATH * 2];
- CHECK_FOR_INTERRUPTS();
+ /* omit interrupts while we shouldn't error-out */
+ if (elevel >= ERROR)
+ CHECK_FOR_INTERRUPTS();
if (strcmp(de->d_name, ".") == 0 ||
strcmp(de->d_name, "..") == 0)
=====
Don't we potentially have the same problem with all on_dsm_detach
callbacks? Looking at the other on_dsm_detach callbacks, I don't see any
CHECK_FOR_INTERRUPTS() calls in them, but it seems fragile to assume that.
I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least
around the removal of the callback from the list and calling the
callback. Maybe even over the whole function.
- Heikki
At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in
On 26/01/2021 06:46, Kyotaro Horiguchi wrote:
Looking at the comment of SharedFileSetOnDetach:
| * everything in them. We can't raise an error on failures, because this runs
| * in error cleanup paths.
I feel that a function that shouldn't error-out also shouldn't be
cancellable. If that's the case, we could omit the CHECK_FOR_INTERRUPTS() in
walkdir() when elevel is smaller than ERROR.
=====
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index b58502837a..593c23553e 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -3374,7 +3374,9 @@ walkdir(const char *path,
{
char subpath[MAXPGPATH * 2];
- CHECK_FOR_INTERRUPTS();
+ /* omit interrupts while we shouldn't error-out */
+ if (elevel >= ERROR)
+ CHECK_FOR_INTERRUPTS();
if (strcmp(de->d_name, ".") == 0 ||
strcmp(de->d_name, "..") == 0)
=====
Don't we potentially have the same problem with all on_dsm_detach
callbacks? Looking at the other on_dsm_detach callbacks, I don't see
any CHECK_FOR_INTERRUPTS() calls in them, but it seems fragile to
assume that.
I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least
around the removal of the callback from the list and calling the
callback. Maybe even over the whole function.
Yes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same
location.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
On Wed, Jan 27, 2021 at 12:22 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:
At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in
Don't we potentially have the same problem with all on_dsm_detach
callbacks? Looking at the other on_dsm_detach callbacks, I don't see
any CHECK_FOR_INTERRUPTS() calls in them, but it seems fragile to
assume that.
I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least
around the removal of the callback from the list and calling the
callback. Maybe even over the whole function.
Yes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same
location.
+1, this seems like a good idea. This is a little bit like the code
near the comments "Don't joggle the elbow of proc_exit".
On Wed, Jan 27, 2021 at 9:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:
On Wed, Jan 27, 2021 at 12:22 AM Kyotaro Horiguchi
<horikyota.ntt@gmail.com> wrote:
At Tue, 26 Jan 2021 11:00:56 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in
Don't we potentially have the same problem with all on_dsm_detach
callbacks? Looking at the other on_dsm_detach callbacks, I don't see
any CHECK_FOR_INTERRUPTS() calls in them, but it seems fragile to
assume that.
I'd suggest adding HOLD/RESUME_INTERRUPTS() to dsm_detach(). At least
around the removal of the callback from the list and calling the
callback. Maybe even over the whole function.
Yes, I first came up with HOLD/RESUME_QUERY_INTERRUPTS() to the same
location.
+1, this seems like a good idea. This is a little bit like the code
near the comments "Don't joggle the elbow of proc_exit".
So that gives a very simple back-patchable patch.
Attachments:
0001-Hold-interrupts-while-running-dsm_detach-callbacks.patch (text/x-patch)
From b27cbabee9a5980c8673c4fee4ea6f7e0c89bdbc Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Sun, 31 Jan 2021 14:04:02 +1300
Subject: [PATCH] Hold interrupts while running dsm_detach() callbacks.
While cleaning up after a parallel query or parallel index creation that
created temporary files, we could be interrupted by a statement timeout.
The error handling path would then fail to clean up the files too,
because the callback was already popped off the list. Prevent this
hazard by holding interrupts while the cleanup code runs.
Thanks to Heikki Linnakangas for this suggestion.
Back-patch to all supported releases.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20191212180506.GR2082@telsasoft.com
---
src/backend/storage/ipc/dsm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/backend/storage/ipc/dsm.c b/src/backend/storage/ipc/dsm.c
index ae82b4bdc0..23bf192727 100644
--- a/src/backend/storage/ipc/dsm.c
+++ b/src/backend/storage/ipc/dsm.c
@@ -771,8 +771,10 @@ dsm_detach(dsm_segment *seg)
/*
* Invoke registered callbacks. Just in case one of those callbacks
* throws a further error that brings us back here, pop the callback
- * before invoking it, to avoid infinite error recursion.
+ * before invoking it, to avoid infinite error recursion. Don't allow
+ * interrupts to prevent cleanup from running to completion.
*/
+ HOLD_INTERRUPTS();
while (!slist_is_empty(&seg->on_detach))
{
slist_node *node;
@@ -788,6 +790,7 @@ dsm_detach(dsm_segment *seg)
function(seg, arg);
}
+ RESUME_INTERRUPTS();
/*
* Try to remove the mapping, if one exists. Normally, there will be, but
--
2.20.1
Thomas Munro <thomas.munro@gmail.com> writes:
+1, this seems like a good idea. This is a little bit like the code
near the comments "Don't joggle the elbow of proc_exit".
So that gives a very simple back-patchable patch.
Hmm, so is the *rest* of that function perfectly okay with being
interrupted?
regards, tom lane
On Sun, Jan 31, 2021 at 6:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Thomas Munro <thomas.munro@gmail.com> writes:
So that gives a very simple back-patchable patch.
Hmm, so is the *rest* of that function perfectly okay with being
interrupted?
It looks OK to me. There aren't any CFI()s in there.
On Fri, Feb 5, 2021 at 5:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:
On Sun, Jan 31, 2021 at 6:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Thomas Munro <thomas.munro@gmail.com> writes:
So that gives a very simple back-patchable patch.
Hmm, so is the *rest* of that function perfectly okay with being
interrupted?
It looks OK to me. There aren't any CFI()s in there.
Pushed. That closes CF #2657.