Warn when parallel restoring a custom dump without data offsets
If pg_dump can't seek on its output stream when writing a dump in the
custom archive format (possibly because its stdout was piped to another
program), it can't update that file with data offsets. These files will often
break parallel restoration. Warn when the user is doing pg_restore on
such a file to give them a hint as to why their restore is about to
fail.
The documentation for pg_restore -j is also updated to suggest that you
dump custom archive formats with the -f option.
---
doc/src/sgml/ref/pg_restore.sgml | 9 +++++++++
src/bin/pg_dump/pg_backup_custom.c | 8 ++++++++
2 files changed, 17 insertions(+)
Attachments:
0001-Warn-when-parallel-restoring-a-custom-dump-without-d.patch (application/octet-stream, +17/-1)
On Sat, May 16, 2020 at 04:57:46PM -0400, David Gilman wrote:
If pg_dump can't seek on its output stream when writing a dump in the
custom archive format (possibly because its stdout was piped to another
program), it can't update that file with data offsets. These files will often
break parallel restoration. Warn when the user is doing pg_restore on
such a file to give them a hint as to why their restore is about to
fail.
You didn't say so, but I gather this is related to this other thread (which
seems to represent two separate issues).
/messages/by-id/1582010626326-0.post@n3.nabble.com
Tom, if you or anyone else with PostgreSQL would appreciate the
pg_dump file I can send it to you out of band, it's only a few
megabytes. I have pg_restore with debug symbols too if you want me to
try anything.
Would you send it to me or post a link to a filesharing site, and I'll try to
reproduce it? So far no luck.
You should include here your diagnosis from that thread, or add it to a commit
message, and mention the suspect commit (548e50976). Then add the patch to
the next commitfest. https://commitfest.postgresql.org/
I guess you're also involved in this conversation:
https://dba.stackexchange.com/questions/257398/pg-restore-with-jobs-flag-results-in-pg-restore-error-a-worker-process-di
--
Justin
I started fooling with this at home while our ISP is broke (pardon my brevity).
Maybe you also saw commit b779ea8a9a2dc3a089b3ac152b1ec4568bfeb26f
"Fix pg_restore so parallel restore doesn't fail when the input file
doesn't contain data offsets (which it won't, if pg_dump thought its
output wasn't seekable)..."
...which I guess should actually say "doesn't NECESSARILY fail", since
it also adds this comment:
"This could fail if we are asked to restore items out-of-order."
So this is a known issue and not a regression. I think the PG11
commit you mentioned (548e5097) happens to make some databases fail in
parallel restore that previously worked (I didn't check). Possibly
also some databases (or some pre-existing dumps) which used to fail
might now succeed.
Your patch adds a warning if unseekable output might fail during
parallel restore. I'm not opposed to that, but can we just make
pg_restore work in that case? If the input is unseekable, then we can
never do a parallel restore at all. If it *is* seekable, could we
make _PrintTocData rewind to the start with fseeko(fh, 0, SEEK_SET) if it
gets to EOF, and re-scan from the beginning? Would you want to try that?
Your understanding of the issue is mostly correct:
I think the PG11
commit you mentioned (548e5097) happens to make some databases fail in
parallel restore that previously worked (I didn't check).
Correct, if you do the bisect around that yourself you'll see
pg_restore start failing with the expected "possibly due to
out-of-order restore request" on offset-less dumps. It is a known
issue but it's only documented in code comments, not anywhere user
facing, which is sending people to StackOverflow.
If the input is unseekable, then we can
never do a parallel restore at all.
I don't know if this is strictly true. Imagine the case of a database
dump of a single large table with a few indexes, so simple enough that
everything in the file is going to be in restore order. It might seem
silly to parallel restore a single table but remember that pg_restore
also creates indexes in parallel and on a typical development
workstation with a few CPU cores and an SSD it'll be a substantial
improvement. There are probably some other corner cases where you can
get lucky with the offset-less dump and it'll work. That's why my gut
instinct was to warn instead of fail.
If it *is* seekable, could we
make _PrintTocData rewind to the start with fseeko(fh, 0, SEEK_SET) if it
gets to EOF, and re-scan from the beginning? Would you want to try that?
I will try this and report back. I will also see if I can get an strace.
--
David Gilman
:DG<
David Gilman <davidgilman1@gmail.com> writes:
I think the PG11
commit you mentioned (548e5097) happens to make some databases fail in
parallel restore that previously worked (I didn't check).
Correct, if you do the bisect around that yourself you'll see
pg_restore start failing with the expected "possibly due to
out-of-order restore request" on offset-less dumps.
Yeah. Now, the whole point of that patch was to decouple the restore
order from the dump order ... but with an offset-less dump file, we
can't do that, or at least the restore order is greatly constrained.
I wonder if it'd be sensible for pg_restore to use a different parallel
scheduling algorithm if it notices that the input lacks offsets.
(There could still be some benefit from parallelism, just not as much.)
No idea if this is going to be worth the trouble, but it probably
is worth looking into.
regards, tom lane
I did some more digging. To keep everyone on the same page there are
four different ways to order TOCs:
1. topological order,
2. dataLength order (the size of the table, which is always zero when pg_dump can't seek),
3. dumpId order, which should be thought of as random but roughly
correlates with topological order, to make things fun,
4. file order, the order that tables are physically stored in the
custom dump file.
Without being able to seek backwards, a parallel restore of the custom
dump archive format has to be ordered by #1 and #4. The reference
counting that reduce_dependencies does inside of the parallel restore
logic upholds ordering #1. Unfortunately, 548e50976ce changed
TocEntrySizeCompare (which is used to break ties within #1) to order
by #2, then by #3. This most often breaks on dumps written by pg_dump
without seeks (everything has a dataLength of zero) as it then falls
back to #3 ordering every time. But, because nothing in pg_restore
does any ordering by #4 you could potentially run into this with any
custom dump so I think it's a regression.
For some troubleshooting I changed ready_list_sort to never call
qsort. This fixes the problem by never ordering by #3, leaving things
in #4 order, but breaks the new algorithm introduced in 548e50976ce.
I did what Justin suggested earlier and it works great. Parallel
restore requires seekable input (enforced elsewhere) so everyone's
parallel restores should work again.
--
David Gilman
:DG<
Attachments:
0001-pg_restore-fix-v2.patch (application/octet-stream, +27/-3)
I've rounded this patch out with a test and I've set up the commitfest
website for this thread. The latest patches are attached and I think
they are ready for review.
--
David Gilman
:DG<
Attachments:
0003-Add-integration-test-for-out-of-order-TOC-requests-i.patch (application/octet-stream, +109/-14)
0001-Remove-unused-seek-check-in-tar-dump-format.patch (application/octet-stream, +0/-6)
0002-Scan-all-TOCs-when-restoring-a-custom-dump-file-with.patch (application/octet-stream, +27/-3)
On Sat, May 23, 2020 at 03:54:30PM -0400, David Gilman wrote:
I've rounded this patch out with a test and I've set up the commitfest
website for this thread. The latest patches are attached and I think
they are ready for review.
Thanks. https://commitfest.postgresql.org/28/2568/
I'm not sure this will be considered a bugfix, since the behavior is known.
Maybe eligible for backpatch though (?)
Your patch was encoded, so this is failing:
http://cfbot.cputube.org/david-gilman.html
Ideally CFBOT would deal with that (maybe by using git-am - adding Thomas), but
I think you sent using gmail web interface, which also reordered the patches.
(CFBOT *does* sort them, but it's a known annoyance).
dump file was written with data offsets pg_restore can seek directly to
offsets COMMA
pg_restore would only find the TOC if it happened to be immediately
"immediately" is wrong, no ? I thought the problem was if we seeked to D and
then looked for C, we wouldn't attempt to go backwards.
read request only when restoring a custom dump file without data offsets.
remove "only"
of a bunch of extra tiny reads when pg_restore starts up.
I would have thought to mention the seeks() ; but it's true that the read()s now
grow quadratically. I did run a test, but I don't know how many objects would
be unreasonable or how many it'd take to show a problem.
Maybe we should avoid fseeko(fh, 0, SEEK_SET) unless we need to wrap around after
EOF - I'm not sure.
Maybe the cleanest way would be to pre-populate a structure with all the TOC
data and loop around that instead of seeking around the file ? Can we use the
same structure as pg_dump ?
Otherwise, that makes me think of commit 42f70cd9c. Maybe it's not a good
parallel or example for this case, though.
+ The custom archive format may not work with the <option>-j</option>
+ option if the archive was originally created by writing the archive
+ to an unseekable output file. For the best concurrent restoration
Can I suggest something like: pg_restore with parallel jobs may fail if the
archive dump was written to an unseekable output stream, like stdout.
+ * If the input file can't be seeked we're at the mercy of the
seeked COMMA
Subject: [PATCH 3/3] Add integration test for out-of-order TOC requests in pg_restore
Well done - thanks for that.
Also add undocumented --disable-seeking argument to pg_dump to emulate
writing to an unseekable output file.
Remove "also".
Is it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?
Maybe that would involve changing the test process to use the shell (system() vs
execve()), or maybe you could write:
/* sh handles output redirection and arg splitting */
'sh', '-c', 'pg_dump -Fc -Z6 --no-sync --disable-seeking postgres > $tempdir/defaults_custom_format_no_seek_parallel_restore.dump',
But I think that would need to then separately handle WIN32, so maybe it's not
worth it.
Updated patches are attached; I ditched the gmail web interface, so
hopefully this works.
Not mentioned in Justin's feedback: I dropped the extra sort in the test
as it's no longer necessary. I also added a parallel dump -> parallel
restore -> dump test run for the directory format to get some free test
coverage.
On Sat, May 23, 2020 at 05:47:51PM -0500, Justin Pryzby wrote:
I'm not sure this will be considered a bugfix, since the behavior is known.
Maybe eligible for backpatch though (?)
I'm not familiar with how your release management works, but I'm
personally fine with whatever version you can get it into. I urge you to
try landing this as soon as possible. The reproducible example
in the test case is very minimal, and I imagine all real-world databases
are going to trigger this.
I would have thought to mention the seeks() ; but it's true that the read()s now
grow quadratically. I did run a test, but I don't know how many objects would
be unreasonable or how many it'd take to show a problem.
And I misunderstood how bad it was. I thought it was reading little
header structs off the disk but it's actually reading the entire table
(see _skipData). So you're quadratically rereading entire tables and
thrashing your cache. Oops.
Maybe we should avoid fseeko(fh, 0, SEEK_SET) unless we need to wrap around after
EOF - I'm not sure.
The seek location is already the location of the end of the last good
object so just adding wraparound gives the good algorithmic performance
from the technique in commit 42f70cd9c. I’ve gone ahead and implemented
this.
Is it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?
The underlying IPC::Run code seems to support piping in a cross-platform
way. I am not a Perl master though and after spending an evening trying
to get it to work I went with this approach. If you can put me in touch
with anyone to help me out here I'd appreciate it.
--
David Gilman :DG<
https://gilslotd.com
Attachments:
0001-Remove-unused-seek-check-in-tar-dump-format.patch (text/x-diff, +0/-6)
0002-Scan-all-TOCs-when-restoring-a-custom-dump-file-with.patch (text/x-diff, +30/-3)
0003-Add-integration-test-for-out-of-order-TOC-requests-i.patch (text/x-diff, +117/-13)
The earlier patches weren't applying because I had "git config
diff.noprefix true" set globally and that was messing up the git
format-patch output.
On Mon, May 25, 2020 at 01:54:29PM -0500, David Gilman wrote:
And I misunderstood how bad it was. I thought it was reading little
header structs off the disk but it's actually reading the entire table
(see _skipData). So you're quadratically rereading entire tables and
thrashing your cache. Oops.
I changed _skipData to fseeko() instead of fread() when possible to cut
down on this thrashing further.
--
David Gilman :DG<
https://gilslotd.com
Attachments:
0001-Remove-unused-seek-check-in-tar-dump-format.patch (text/x-diff, +0/-6)
0002-Skip-tables-in-pg_restore-by-seeking-instead-of-read.patch (text/x-diff, +18/-11)
0003-Scan-all-TOCs-when-restoring-a-custom-dump-file-with.patch (text/x-diff, +30/-3)
0004-Add-integration-test-for-out-of-order-TOC-requests-i.patch (text/x-diff, +117/-13)
I've attached the latest patches after further review from Justin Pryzby.
--
David Gilman :DG<
https://gilslotd.com
Attachments:
0001-Scan-all-TOCs-when-restoring-a-custom-dump-file-with.patch (text/x-diff, +28/-6)
0002-Add-integration-test-for-out-of-order-TOC-requests-i.patch (text/x-diff, +117/-13)
0003-Remove-unused-seek-check-in-tar-dump-format.patch (text/x-diff, +0/-6)
0004-Skip-tables-in-pg_restore-by-seeking-instead-of-read.patch (text/x-diff, +18/-11)
On Mon, May 25, 2020 at 01:54:29PM -0500, David Gilman wrote:
Is it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?
The underlying IPC::Run code seems to support piping in a cross-platform
way. I am not a Perl master though and after spending an evening trying
to get it to work I went with this approach. If you can put me in touch
with anyone to help me out here I'd appreciate it.
I think you can do what's needed like so:
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -152,10 +152,13 @@ my %pgdump_runs = (
},
defaults_custom_format_no_seek_parallel_restore => {
test_key => 'defaults',
- dump_cmd => [
- 'pg_dump', '-Fc', '-Z6', '--no-sync', '--disable-seeking',
+ dump_cmd => (
+ [
+ 'pg_dump', '-Fc', '-Z6', '--no-sync',
"--file=$tempdir/defaults_custom_format_no_seek_parallel_restore.dump", 'postgres',
- ],
+ ],
+ "|", [ "cat" ], # disable seeking
+ ),
Also, these are failing intermittently:
t/002_pg_dump.pl .............. 1649/6758
# Failed test 'defaults_custom_format_no_seek_parallel_restore: should dump GRANT SELECT (proname ...) ON TABLE pg_proc TO public'
# at t/002_pg_dump.pl line 3635.
# Review defaults_custom_format_no_seek_parallel_restore results in /var/lib/pgsql/postgresql.src/src/bin/pg_dump/tmp_check/tmp_test_NqRC
t/002_pg_dump.pl .............. 2060/6758
# Failed test 'defaults_dir_format_parallel: should dump GRANT SELECT (proname ...) ON TABLE pg_proc TO public'
# at t/002_pg_dump.pl line 3635.
# Review defaults_dir_format_parallel results in /var/lib/pgsql/postgresql.src/src/bin/pg_dump/tmp_check/tmp_test_NqRC
If you can address those, I think this will be "ready for committer".
--
Justin
David Gilman <dgilman@gilslotd.com> writes:
I've attached the latest patches after further review from Justin Pryzby.
I guess I'm completely confused about the purpose of these patches.
Far from coping with the situation of an unseekable file, they appear
to change pg_restore so that it fails altogether if it can't seek
its input file. Why would we want to do this?
regards, tom lane
On Thu, Jul 02, 2020 at 05:25:21PM -0400, Tom Lane wrote:
I guess I'm completely confused about the purpose of these patches.
Far from coping with the situation of an unseekable file, they appear
to change pg_restore so that it fails altogether if it can't seek
its input file. Why would we want to do this?
I'm not sure where the "fails altogether if it can't seek" is. The
"Skip tables in pg_restore" patch retains the old fread() logic. The
--disable-seeking stuff was just to support tests, and thanks to
help from Justin Pryzby the tests no longer require it. I've attached
the updated patch set.
Note that this still shouldn't be merged because of Justin's bug report
in 20200706050129.GW4107@telsasoft.com which is unrelated to this change
but will leave you with flaky CI until it's fixed.
--
David Gilman :DG<
https://gilslotd.com
Attachments:
0001-Scan-all-TOCs-when-restoring-a-custom-dump-file-with.patch (text/x-diff, +28/-6)
0002-Add-integration-test-for-out-of-order-TOC-requests-i.patch (text/x-diff, +114/-6)
0003-Remove-unused-seek-check-in-tar-dump-format.patch (text/x-diff, +0/-6)
0004-Skip-tables-in-pg_restore-by-seeking-instead-of-read.patch (text/x-diff, +18/-11)
David Gilman <dgilman@gilslotd.com> writes:
On Thu, Jul 02, 2020 at 05:25:21PM -0400, Tom Lane wrote:
I guess I'm completely confused about the purpose of these patches.
Far from coping with the situation of an unseekable file, they appear
to change pg_restore so that it fails altogether if it can't seek
its input file. Why would we want to do this?
I'm not sure where the "fails altogether if it can't seek" is.
I misread the patch, is where :-(
As penance, I spent some time studying this patchset, and have a few
comments:
1. The proposed doc change in 0001 seems out-of-date; isn't it adding a
warning about exactly the deficiency that the rest of the patch is
eliminating? Note that the preceding para already says that the input
has to be seekable, so that's covered. Maybe there is reason for
documenting that parallel restore will be slower if the archive was
written in a non-seekable way ... but that's not what this says.
2. It struck me that the patch is still pretty inefficient, in that
anytime it has to back up in an offset-less archive, it blindly rewinds
to dataStart and rescans everything. In the worst case that'd still be
O(N^2) work, and it's really not necessary, because once we've seen a
given data block we know where it is. We just have to remember that,
which seems easy enough. (Well, on Windows it's a bit trickier because
the state in question is shared across threads; but that's good, it might
save some work.)
3. Extending on #2, we actually don't need the rewind and retry logic
at all. If we are looking for a block we haven't already seen, and we
get to the end of the archive, it ain't there. (This is a bit less
obvious in the Windows case than otherwise, but I think it's still true,
given that the start state is either "all offsets known" or "no offsets
known". A particular thread might skip over some blocks on the strength
of an offset established by another thread, but the blocks ahead of that
spot must now all have known offsets.)
4. Patch 0002 seems mighty expensive for the amount of code coverage
it's adding. On my machine it seems to raise the overall runtime of
pg_dump's "make installcheck" by about 10%, and the only new coverage
is of the few lines added here. I wonder if we couldn't cover that
more cheaply by testing what happens when we use a "-L" option with
an intentionally mis-sorted restore list.
5. I'm inclined to reject 0003. It's not saving anything very meaningful,
and we'd just have to put the code back whenever somebody gets around
to making pg_backup_tar.c capable of out-of-order restores like
pg_backup_custom.c is now able to do.
The attached 0001 rewrites your 0001 as per the above ideas (dropping
the proposed doc change for now), and includes your 0004 for simplicity.
I'm including your 0002 verbatim just so the cfbot will be able to do a
meaningful test on 0001; but as stated, I don't really want to commit it.
regards, tom lane
Attachments:
0001-remember-seek-positions.patch (text/x-diff, +69/-27)
0002-Add-integration-test-for-out-of-order-TOC-requests-i.patch (text/x-diff, +114/-6)
I wrote:
The attached 0001 rewrites your 0001 as per the above ideas (dropping
the proposed doc change for now), and includes your 0004 for simplicity.
I'm including your 0002 verbatim just so the cfbot will be able to do a
meaningful test on 0001; but as stated, I don't really want to commit it.
I spent some more time testing this, by trying to dump and restore the
core regression database. I immediately noticed that I sometimes got
"ftell mismatch with expected position -- ftell used" warnings, though
it was a bit variable depending on the -j level. The reason was fairly
apparent on looking at the code: we had various fseeko() calls in code
paths that did not bother to correct ctx->filePos afterwards. In fact,
*none* of the four existing fseeko calls in pg_backup_custom.c did so.
It's fairly surprising that that hadn't caused a problem up to now.
I started to add adjustments of ctx->filePos after all the fseeko calls,
but then began to wonder why we don't just rip the variable out entirely.
The only places where we need it are to set dataPos for data blocks,
but that's an entirely pointless activity if we don't have seek
capability, because we're not going to be able to rewrite the TOC
to emit the updated values.
Hence, the 0000 patch attached rips out ctx->filePos, and then
0001 is the currently-discussed patch rebased on that. I also added
an additional refinement, which is to track the furthest point we've
scanned to while looking for data blocks in an offset-less file.
If we have seek capability, then when we need to resume looking for
data blocks we can search forward from that spot rather than wherever
we happened to have stopped at. This fixes an additional source
of potentially-O(N^2) behavior if we have to restore blocks in a
very out-of-order fashion. I'm not sure that it makes much difference
in common cases, but with this we can say positively that we don't
scan the same block more than once per worker process.
I'm still unhappy about the proposed test case (0002), but now
I have a more concrete reason for that: it didn't catch this bug,
so the coverage is still pretty miserable.
Dump-and-restore-the-regression-database used to be a pretty common
manual test for pg_dump, but we never got around to automating it,
possibly because we figured that the pg_upgrade test script covers
that ground. It's becoming gruesomely clear that pg_upgrade is a
distinct operating mode that doesn't necessarily have the same bugs.
So I'm inclined to feel that what we ought to do is automate a test
of that sort; but first we'll have to fix the existing bugs described
at [1][2].
Given the current state of affairs, I'm inclined to commit the
attached with no new test coverage, and then come back and look
at better testing after the other bugs are dealt with.
regards, tom lane
[1]: /messages/by-id/3169466.1594841366@sss.pgh.pa.us
[2]: /messages/by-id/3170626.1594842723@sss.pgh.pa.us