BUG #18815: Logical replication worker Segmentation fault

Started by PG Bug reporting form · about 1 year ago · 23 messages
#1 PG Bug reporting form
noreply@postgresql.org

The following bug has been logged on the website:

Bug reference: 18815
Logged by: Sergey Belyashov
Email address: sergey.belyashov@gmail.com
PostgreSQL version: 17.3
Operating system: Debian bookworm x86_64
Description:

Today I tried to upgrade my cluster from postgresql-16 to postgresql-17.
Everything was successful until I restored some logical replication
subscriptions. When a subscription is activated and the first data arrives,
the server logs:
2025-02-17 13:34:08.975 [98417] LOG: logical replication apply worker for
subscription "node4_closed_sessions_sub" has started
2025-02-17 13:34:11.213 [62583] LOG: background worker "logical
replication apply worker" (PID 98417) was terminated by signal 11:
Segmentation fault
2025-02-17 13:34:11.213 [62583] LOG: terminating any other active server
processes
2025-02-17 13:34:11.240 [62583] LOG: all server processes terminated;
reinitializing
2025-02-17 13:34:11.310 [98418] LOG: database system was interrupted; last
known up at 2025-02-17 13:22:08
and then restarts.
The kernel logged the following:
[94740743.468001] postgres[98417]: segfault at 10 ip 0000562b2b74d69c sp
00007fff284a7320 error 4 in postgres[562b2b6bb000+595000]
[94740743.468173] Code: 1f 80 00 00 00 00 44 89 e0 48 8b 15 56 0b 82 00 f7
d0 48 98 4c 8b 3c c2 eb 99 0f 1f 40 00 55 48 89 e5 53 48 89
fb 48 83 ec 08 <8b> 7f 10 e8 4c b1 32 00 8b 7b 14 85 ff 75 15 48 89 df 48
8b 5d f8

After some investigation I found that the segfault is caused by one type of
subscription: subscriptions for huge partitioned tables on both the
publisher and the subscriber (published via the root table), created with
copy_data = false (the source table is updated by inserts and partition
detaches; it is huge, and the data transfer is not compressed, so it may
take days). The segfault does not occur immediately after the subscription
is created, but once data arrives from the publisher. The subscriber then
restarts, recovers, runs the subscription again, hits the segfault, and
repeats until the subscription is disabled.

Subscriptions for small tables without partitions work fine.

The publisher's version makes no difference: publishers on both 16 and 17
cause the segfault on the subscriber (version 17.3).

PostgreSQL versions 12-16 worked for years without any segfault on the same
partitioned tables and publications/subscriptions.
postgresql-17 = 17.3-3.pgdg120+1 was installed from the repository:
http://apt.postgresql.org/pub/repos/apt/ bookworm-pgdg main

#2 Tom Lane
tgl@sss.pgh.pa.us
In reply to: PG Bug reporting form (#1)
Re: BUG #18815: Logical replication worker Segmentation fault

PG Bug reporting form <noreply@postgresql.org> writes:

After some investigation I found that the segfault is caused by one type of
subscription: subscriptions for huge partitioned tables on both the
publisher and the subscriber (published via the root table), created with
copy_data = false (the source table is updated by inserts and partition
detaches; it is huge, and the data transfer is not compressed, so it may
take days). The segfault does not occur immediately after the subscription
is created, but once data arrives from the publisher. The subscriber then
restarts, recovers, runs the subscription again, hits the segfault, and
repeats until the subscription is disabled.

This is not enough information for anyone else to reproduce the
problem; it very likely depends on details that you haven't mentioned.
Can you create a reproducer case? I'm hoping for a script that sets
up the necessary tables and subscriptions and populates the tables
with enough dummy data to cause the failure.

Something that might be less work for you is to get a stack trace
from the crash:

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

However, I make no promises that we can isolate the cause from
just a stack trace. A reproducer would be much better.

regards, tom lane

#3 Sergey Belyashov
sergey.belyashov@gmail.com
In reply to: Tom Lane (#2)
Re: BUG #18815: Logical replication worker Segmentation fault

Hi,

I think a backtrace will help.
Core was generated by `postgres: 17/main: logical replication apply
worker for subscription 602051860'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
(gdb) backtrace
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
#1 0x00005635402bfddd in brininsertcleanup (index=<optimized out>,
indexInfo=<optimized out>)
at ./build/../src/backend/access/brin/brin.c:515
#2 0x0000563540479309 in ExecCloseIndices
(resultRelInfo=resultRelInfo@entry=0x563541cab8d0)
at ./build/../src/backend/executor/execIndexing.c:248
#3 0x000056354048067f in ExecCleanupTupleRouting
(mtstate=0x563541c6db58, proute=0x563541cab638)
at ./build/../src/backend/executor/execPartition.c:1270
#4 0x00005635405c89f7 in finish_edata (edata=0x563541ca0fa8)
at ./build/../src/backend/replication/logical/worker.c:718
#5 0x00005635405cc6c4 in apply_handle_insert (s=0x7f61d2a3a1d8)
at ./build/../src/backend/replication/logical/worker.c:2438
#6 apply_dispatch (s=s@entry=0x7ffd30d95a70) at
./build/../src/backend/replication/logical/worker.c:3296
#7 0x00005635405cdb7f in LogicalRepApplyLoop (last_received=106949425100872)
at ./build/../src/backend/replication/logical/worker.c:3587
#8 start_apply (origin_startpos=origin_startpos@entry=0)
at ./build/../src/backend/replication/logical/worker.c:4429
#9 0x00005635405ce11f in run_apply_worker () at
./build/../src/backend/replication/logical/worker.c:4550
#10 ApplyWorkerMain (main_arg=<optimized out>) at
./build/../src/backend/replication/logical/worker.c:4719
#11 0x0000563540594bf8 in BackgroundWorkerMain (startup_data=<optimized out>,
startup_data_len=<optimized out>) at
./build/../src/backend/postmaster/bgworker.c:848
#12 0x0000563540596daa in postmaster_child_launch
(child_type=child_type@entry=B_BG_WORKER,
startup_data=startup_data@entry=0x563541bc3618 "",
startup_data_len=startup_data_len@entry=1472,
client_sock=client_sock@entry=0x0) at
./build/../src/backend/postmaster/launch_backend.c:277
#13 0x0000563540598f88 in do_start_bgworker (rw=0x563541bc3618)
at ./build/../src/backend/postmaster/postmaster.c:4272
#14 maybe_start_bgworkers () at
./build/../src/backend/postmaster/postmaster.c:4503
#15 0x0000563540599fea in process_pm_pmsignal () at
./build/../src/backend/postmaster/postmaster.c:3776
#16 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1669
#17 0x000056354059c7c8 in PostmasterMain (argc=argc@entry=5,
argv=argv@entry=0x563541b229c0)
at ./build/../src/backend/postmaster/postmaster.c:1374
#18 0x00005635402bf5b1 in main (argc=5, argv=0x563541b229c0) at
./build/../src/backend/main/main.c:199

The destination (subscriber) table has two timestamp columns, "started" and
"closed", with a BRIN index on each of them. The table is partitioned by
range on the "closed" column, and each partition is split into 6
subpartitions by list (remainder of id).

Best regards,
Sergey Belyashov

On Mon, Feb 17, 2025 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:


#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Sergey Belyashov (#3)
Re: BUG #18815: Logical replication worker Segmentation fault

Sergey Belyashov <sergey.belyashov@gmail.com> writes:

I think backtrace will help.
Core was generated by `postgres: 17/main: logical replication apply
worker for subscription 602051860'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
(gdb) backtrace
#0 0x00005635402c869c in brinRevmapTerminate (revmap=0x0)
at ./build/../src/backend/access/brin/brin_revmap.c:102
#1 0x00005635402bfddd in brininsertcleanup (index=<optimized out>,
indexInfo=<optimized out>)
at ./build/../src/backend/access/brin/brin.c:515
#2 0x0000563540479309 in ExecCloseIndices
(resultRelInfo=resultRelInfo@entry=0x563541cab8d0)
at ./build/../src/backend/executor/execIndexing.c:248

Thanks! It seems clear from that that the fault is basically in
brininsertcleanup(), which is trashing the BrinInsertState but
leaving indexInfo->ii_AmCache still pointing at it, so that
the next brininsert() will think it has a valid cache entry.
I suspect that the attached will fix it. What I don't understand
is why it's apparently so hard to trigger the crash, because it
looks to me like any two successive insert commands on the same
BRIN index should hit this.

BTW, I'm also a bit suspicious of the comment's claim that the
brinDesc doesn't need cleanup. That looks like a potential
memory leak.

regards, tom lane

Attachments:

fix-brininsertcleanup.patch (text/x-diff, +5 -3)
#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#4)
Re: BUG #18815: Logical replication worker Segmentation fault

I wrote:

I suspect that the attached will fix it. What I don't understand
is why it's apparently so hard to trigger the crash, because it
looks to me like any two successive insert commands on the same
BRIN index should hit this.

Oh, wait: I was confusing ii_AmCache with rd_amcache in the index's
relcache entry. This coding would absolutely not work with rd_amcache
since that's persistent. It mostly works with the IndexInfo field
though, since an IndexInfo typically only survives per-query.
Evidently there's some path in logical replication that will re-use an
IndexInfo across multiple distinct insertion operations, and that's
what breaks it.

BTW, I'm also a bit suspicious of the comment's claim that the
brinDesc doesn't need cleanup. That looks like a potential
memory leak.

This concern still stands.

regards, tom lane

#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#5)
Re: BUG #18815: Logical replication worker Segmentation fault

Further to this ... I'd still really like to have a reproducer.
While brininsertcleanup is clearly being less robust than it should
be, I now suspect that there is another bug somewhere further down
the call stack. We're getting to this point via ExecCloseIndices,
and that should be paired with ExecOpenIndices, and that would have
created a fresh IndexInfo. So it looks a lot like some path in a
logrep worker is able to call ExecCloseIndices twice on the same
working data. That would probably lead to a "releasing a lock you
don't own" error if we weren't hitting this crash first.

regards, tom lane

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#6)
Re: BUG #18815: Logical replication worker Segmentation fault

I wrote:

Further to this ... I'd still really like to have a reproducer.
While brininsertcleanup is clearly being less robust than it should
be, I now suspect that there is another bug somewhere further down
the call stack. We're getting to this point via ExecCloseIndices,
and that should be paired with ExecOpenIndices, and that would have
created a fresh IndexInfo. So it looks a lot like some path in a
logrep worker is able to call ExecCloseIndices twice on the same
working data. That would probably lead to a "releasing a lock you
don't own" error if we weren't hitting this crash first.

Hmm ... I tried modifying ExecCloseIndices to blow up if called
twice, as in the attached. This gets through core regression
just fine, but it blows up in three different subscription TAP
tests, all with a stack trace matching Sergey's:

#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f064bfe3e65 in __GI_abort () at abort.c:79
#2 0x00000000009e9253 in ExceptionalCondition (
conditionName=conditionName@entry=0xb8717b "indexDescs[i] != NULL",
fileName=fileName@entry=0xb87139 "execIndexing.c",
lineNumber=lineNumber@entry=249) at assert.c:66
#3 0x00000000006f0b13 in ExecCloseIndices (
resultRelInfo=resultRelInfo@entry=0x2f11c18) at execIndexing.c:249
#4 0x00000000006f86d8 in ExecCleanupTupleRouting (mtstate=0x2ef92d8,
proute=0x2ef94e8) at execPartition.c:1273
#5 0x0000000000848cb6 in finish_edata (edata=0x2ef8f50) at worker.c:717
#6 0x000000000084d0a0 in apply_handle_insert (s=<optimized out>)
at worker.c:2460
#7 apply_dispatch (s=<optimized out>) at worker.c:3389
#8 0x000000000084e494 in LogicalRepApplyLoop (last_received=25066600)
at worker.c:3680
#9 start_apply (origin_startpos=0) at worker.c:4507
#10 0x000000000084e711 in run_apply_worker () at worker.c:4629
#11 ApplyWorkerMain (main_arg=<optimized out>) at worker.c:4798
#12 0x00000000008138f9 in BackgroundWorkerMain (startup_data=<optimized out>,
startup_data_len=<optimized out>) at bgworker.c:842

The problem seems to be that apply_handle_insert_internal does
ExecOpenIndices and then ExecCloseIndices, and then
ExecCleanupTupleRouting does ExecCloseIndices again, which nicely
explains why brininsertcleanup blows up if you happen to have a BRIN
index involved. What it doesn't explain is how come we don't see
other symptoms from the duplicate index_close calls, regardless of
index type. I'd have expected an assertion failure from
RelationDecrementReferenceCount, and/or an assertion failure for
nonzero rd_refcnt at transaction end, and/or a "you don't own a lock
of type X" gripe from LockRelease. We aren't getting any of those,
but why not, if this code is as broken as I think it is?

(On closer inspection, we seem to have about 99% broken relcache.c's
ability to notice rd_refcnt being nonzero at transaction end, but
the other two things should still be happening.)

regards, tom lane

Attachments:

complain-about-duplicate-ExecCloseIndices.patch (text/x-diff, +3 -2)
#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#7)
Re: BUG #18815: Logical replication worker Segmentation fault

I wrote:

The problem seems to be that apply_handle_insert_internal does
ExecOpenIndices and then ExecCloseIndices, and then
ExecCleanupTupleRouting does ExecCloseIndices again, which nicely
explains why brininsertcleanup blows up if you happen to have a BRIN
index involved. What it doesn't explain is how come we don't see
other symptoms from the duplicate index_close calls, regardless of
index type.

Hah, I see it: this bug is specific to the apply_handle_tuple_routing
code path. The ResultRelInfo we're dealing with is made by
ExecFindPartition, which does ExecOpenIndices on it. Then
apply_handle_tuple_routing calls apply_handle_insert_internal, which
does ExecOpenIndices *again* on the same ResultRelInfo. That gets a
second refcount and second lock on the index(es), and leaks all the
IndexInfo data structures made by the first call. When done,
apply_handle_insert_internal does ExecCloseIndices, releasing one
refcount and lock. Then, back in apply_handle_tuple_routing, we do
ExecCloseIndices again. So the refcounts and locks balance out, and
it very accidentally fails to crash, so long as nobody is expecting
the contents of the IndexInfos to match up.

The "Move the tuple into the new partition" path in
apply_handle_tuple_routing's UPDATE case is even squirrelier. That
does another ExecFindPartition and then apply_handle_insert_internal,
but there's no ExecCloseIndices call balancing the ExecFindPartition,
meaning it looks like it's leaking refcount and lock. Experimentation
shows that it's not, because the matching ExecCloseIndices is
eventually done by the ExecCleanupTupleRouting call in finish_edata
after returning to the outer apply_handle_update.

So if you ask me, this is impossibly convoluted and in need of a
significant rewrite. It's not okay to be opening/closing the
ResultRelInfo's indexes twice (not least because "speculative"
is different in the two open calls). And it's not okay to have
such radically different paths for cleaning up the index-opening
done by ExecFindPartition. It's even more not okay that there's
not one word of commentary about this complexity, strongly suggesting
that the original author didn't quite understand how it worked either.

regards, tom lane

#9 Sergey Belyashov
sergey.belyashov@gmail.com
In reply to: Tom Lane (#7)
Re: BUG #18815: Logical replication worker Segmentation fault

Hi,

Do I need to apply this patch for debugging purposes?

Meanwhile I plan to remove the BRIN indexes from the active partitions and
start replication; once the issue is fixed I will add them back.

Best regards,
Sergey Belyashov

On Tue, Feb 18, 2025 at 02:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:


#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Sergey Belyashov (#9)
Re: BUG #18815: Logical replication worker Segmentation fault

Sergey Belyashov <sergey.belyashov@gmail.com> writes:

Do I need to apply this patch for debugging purposes?

I think we've debugged enough to understand what's happening.
Thanks for the report!

If you just want to get some work done, applying that patch
should be enough to prevent the crash, although I want to
change some other things too.

regards, tom lane

#11 Sergey Belyashov
sergey.belyashov@gmail.com
In reply to: Tom Lane (#10)
Re: BUG #18815: Logical replication worker Segmentation fault

Thank you very much

On Tue, Feb 18, 2025 at 19:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:


#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#10)
Re: BUG #18815: Logical replication worker Segmentation fault

[ redirecting to -hackers for patch discussion ]

Here's a completed set of patches for bug #18815 [1] (which,
briefly, is that the logrep worker opens/closes indexes
multiple times and thereby breaks brininsertcleanup).

0001 adds a subscriber-side BRIN index to 013_partition.pl, which
replicates the crash Sergey reported. I've got mixed feelings about
whether to actually commit this: it's certainly useful for testing
this fix, but without context it looks like a pretty random thing to
do. It could be justified perhaps on the grounds of testing
aminsertcleanup callbacks within replication, since BRIN is the
only core index type that has such a callback.

0002 fixes the crash by hardening brininsertcleanup against multiple
calls. I think that this is patching a symptom not the root cause,
but it still seems like good defensive programming.

0003 adds Asserts to ExecOpenIndices and ExecCloseIndices to check
that they're not called more than once per ResultRelInfo, and thereby
exposes what I consider the root cause: apply_handle_tuple_routing
opens the indexes twice and then closes them twice. This somewhat
accidentally leaves us with zero refcounts and zero locks on the
indexes, so in the absence of aminsertcleanup callbacks the only real
reason to complain is the duplicative construction of the IndexInfo
structures. But the double call of brininsertcleanup breaks the
existing coding of that function, and it might well break third-party
aminsertcleanup callbacks if there are any. So I think we should
institute a policy that this coding pattern is forbidden.

And finally, 0004 fixes worker.c to not do that. This turned out
simpler than I thought, because I was misled by believing that the
ExecOpenIndices/ExecCloseIndices calls in apply_handle_tuple_routing
itself do something useful. They don't, and should be nuked outright.

I don't intend 0003 for back-patch, but the rest of this has to go
back as far as the bug can be observed, which I didn't test yet.
To be clear, 0004 would fix the issue even without 0002, but
I still think we should back-patch both.

regards, tom lane

[1]: /messages/by-id/18815-2a0407cc7f40b327@postgresql.org

Attachments:

v1-0001-Demonstrate-crash-in-brininsertcleanup-during-log.patch (text/x-diff, +3 -1)
v1-0002-Harden-brininsertcleanup-against-repeat-calls.patch (text/x-diff, +5 -4)
v1-0003-Assert-that-ExecOpenIndices-and-ExecCloseIndices-.patch (text/x-diff, +11 -5)
v1-0004-Avoid-duplicate-ExecOpenIndices-ExecCloseIndices-.patch (text/x-diff, +16 -15)
#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#12)
Re: BUG #18815: Logical replication worker Segmentation fault

I wrote:

I don't intend 0003 for back-patch, but the rest of this has to go
back as far as the bug can be observed, which I didn't test yet.

Ah: this bug is new in v17, because neither brininsertcleanup nor
the whole aminsertcleanup infrastructure exist before that.

So that leaves us with a question of whether 0004 should be
back-patched any further than v17. There's some attraction
to doing so to keep the branches in sync; but there's already
quite a lot of cross-branch churn in worker.c, so on the whole
I'm inclined to just do v17.

regards, tom lane

#14 Sergey Belyashov
sergey.belyashov@gmail.com
In reply to: Tom Lane (#13)
Re: BUG #18815: Logical replication worker Segmentation fault

Hi,

The 4th patch does not apply to the current REL_17_STABLE branch;
worker.c there differs considerably from master.

Best regards,
Sergey Belyashov

On Wed, Feb 19, 2025 at 01:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:


#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Sergey Belyashov (#14)
Re: BUG #18815: Logical replication worker Segmentation fault

Sergey Belyashov <sergey.belyashov@gmail.com> writes:

The 4th patch is not applicable for the current REL_17_STABLE branch.

Yeah, I saw that, didn't work on fixing it yet. It looked like
the diffs were mostly about the ExecOpen/CloseIndices calls in
apply_handle_tuple_routing, which we already knew are completely
bogus. So I'm hopeful that just removing whichever of them are
in v17 will do the job --- but it'll require a full re-test
to be sure :-(

regards, tom lane

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Sergey Belyashov (#14)
Re: BUG #18815: Logical replication worker Segmentation fault

Sergey Belyashov <sergey.belyashov@gmail.com> writes:

The 4th patch is not applicable for the current REL_17_STABLE branch.
There are a lot of differences from master in the worker.c.

If you want to try the committed v17 patch, see

https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=788baa9a25ae83b576621166367c868d3329b4d4

Thanks again for the report!

regards, tom lane

#17 Sergey Belyashov
sergey.belyashov@gmail.com
In reply to: Tom Lane (#16)
Re: BUG #18815: Logical replication worker Segmentation fault

I have applied the patch. Replication now works. Thank you.

Regards

On Thu, Feb 20, 2025 at 00:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:


#18 Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Tom Lane (#12)
RE: BUG #18815: Logical replication worker Segmentation fault

On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:

Hi,

Here's a completed set of patches for bug #18815 [1] (which, briefly,
is that the logrep worker opens/closes indexes multiple times and
thereby breaks brininsertcleanup).

0001 adds a subscriber-side BRIN index to 013_partition.pl, which
replicates the crash Sergey reported. I've got mixed feelings about
whether to actually commit this: it's certainly useful for testing
this fix, but without context it looks like a pretty random thing to
do. It could be justified perhaps on the grounds of testing
aminsertcleanup callbacks within replication, since BRIN is the only core index type that has such a callback.

0002 fixes the crash by hardening brininsertcleanup against multiple
calls. I think that this is patching a symptom not the root cause,
but it still seems like good defensive programming.

0003 adds Asserts to ExecOpenIndices and ExecCloseIndices to check
that they're not called more than once per ResultRelInfo, and thereby
exposes what I consider the root cause: apply_handle_tuple_routing
opens the indexes twice and then closes them twice. This somewhat
accidentally leaves us with zero refcounts and zero locks on the
indexes, so in the absence of aminsertcleanup callbacks the only real
reason to complain is the duplicative construction of the IndexInfo
structures. But the double call of brininsertcleanup breaks the
existing coding of that function, and it might well break third-party
aminsertcleanup callbacks if there are any. So I think we should
institute a policy that this coding pattern is forbidden.

And finally, 0004 fixes worker.c to not do that. This turned out
simpler than I thought, because I was misled by believing that the
ExecOpenIndices/ExecCloseIndices calls in apply_handle_tuple_routing
itself do something useful. They don't, and should be nuked outright.

My colleague Nisha reported off-list a crash in logical replication that
occurs during unique constraint violations on leaf partitions. Upon
investigation, I confirmed that this crash started happening after commit
9ff6867.

The problem arises because unique key conflict detection relied on the
ExecOpenIndices call in apply_handle_insert_internal and
apply_handle_tuple_routing with the speculative parameter set to true to
construct necessary index information. However, these openings were redundant,
as indexes had already been opened during target partition searches via the
parent table (e.g., using ExecFindPartition). Hence, they were removed in
commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
constructing the needed index information for conflict detection, which leads
to the crash.

To fix it, I tried adding a detect_conflict flag in ModifyTableState,
enabling ExecFindPartition() to build the required index information
internally in this case, as in the attached patch.

An alternative approach may be to delay index information initialization until
immediately before executing the actual update or insert. E.g., do that in
InitConflictIndexes(). This approach can also avoid unnecessary construction of
index information when the tuple to be updated is not found, or during a
cross-partition update where index information for the source partition is not
required. However, it introduces an additional location for index
initialization, potentially increasing maintenance efforts.

Best Regards,
Hou zj

Attachments:

v1-0001-Fix-crashes-in-logical-replication-during-unique-.patch (application/octet-stream, +48 -3)
#19Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#18)
Re: BUG #18815: Logical replication worker Segmentation fault

On Wed, Mar 26, 2025 at 10:38 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:

My colleague Nisha reported off-list about a crash in logical replication that
occurs during unique constraint violations on leaf partitions. Upon
investigation, I confirmed that this crash started happening after commit
9ff6867.

Even though the commit 9ff68679b5 is backpatched to 17, the proposed
patch needs to be applied only for HEAD because the requirement for
additional conflict information is new to HEAD, right?

The problem arises because unique key conflict detection relied on the
ExecOpenIndices call in apply_handle_insert_internal and
apply_handle_tuple_routing with the speculative parameter set to true to
construct necessary index information. However, these openings were redundant,
as indexes had already been opened during target partition searches via the
parent table (e.g., using ExecFindPartition). Hence, they were removed in
commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
constructing the needed index information for conflict detection, which leads
to the crash.

To fix it, I tried to add a detect_conflict flag in ModifyTableState, enabling
ExecFindPartition() to internally build the required index information in this
case, like the attachment.

I have a question about the additional information built in
ExecOpenIndices(). We build the required information when the index is
not used for an exclusion constraint, whereas we set the specConflict
flag in ExecInsertIndexTuples() even in the exclusion-constraint case.
The specConflict flag is used by the caller to report conflicts. So,
won't we need to build the required information in ExecOpenIndices()
even when the index is used for an exclusion constraint? What is the
behavior of the conflict reporting code in the case of exclusion
constraints?

An alternative approach may be to delay index information initialization until
immediately before executing the actual update or insert. E.g., do that in
InitConflictIndexes().

Right, this is also worth considering. But let us first conclude on
the existing approach to build the required information in
ExecOpenIndices().

--
With Regards,
Amit Kapila.

#20Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#19)
RE: BUG #18815: Logical replication worker Segmentation fault

On Mon, Apr 7, 2025 at 5:37 PM Amit Kapila wrote:

On Wed, Mar 26, 2025 at 10:38 AM Zhijie Hou (Fujitsu)
<houzj.fnst@fujitsu.com> wrote:

On Wed, Feb 19, 2025 at 4:38 AM Tom Lane wrote:

My colleague Nisha reported off-list about a crash in logical replication that
occurs during unique constraint violations on leaf partitions. Upon
investigation, I confirmed that this crash started happening after commit
9ff6867.

Even though the commit 9ff68679b5 is backpatched to 17, the proposed
patch needs to be applied only for HEAD because the requirement for
additional conflict information is new to HEAD, right?

Right.

The problem arises because unique key conflict detection relied on the
ExecOpenIndices call in apply_handle_insert_internal and
apply_handle_tuple_routing with the speculative parameter set to true to
construct necessary index information. However, these openings were redundant,
as indexes had already been opened during target partition searches via the
parent table (e.g., using ExecFindPartition). Hence, they were removed in
commit 9ff6867. Unfortunately, ExecFindPartition opens indexes without
constructing the needed index information for conflict detection, which leads
to the crash.

To fix it, I tried to add a detect_conflict flag in ModifyTableState, enabling
ExecFindPartition() to internally build the required index information in this
case, like the attachment.

I have a question about the additional information built in
ExecOpenIndices(). We build the required information when the index is
not used for an exclusion constraint, whereas we set the specConflict
flag in ExecInsertIndexTuples() even in the exclusion-constraint case.
The specConflict flag is used by the caller to report conflicts. So,
won't we need to build the required information in ExecOpenIndices()
even when the index is used for an exclusion constraint?

I think the index information used for detecting exclusion conflicts is
always initialized in ExecOpenIndices -> BuildIndexInfo ->
RelationGetExclusionInfo, regardless of the input parameter.

What is the
behavior of conflict reporting code in case of exclusion constraints?

In the logical replication context, since we do not detect conflicts for
exclusion constraints, it simply reports the original constraint
violation ERROR.

Best Regards,
Hou zj

#21Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#20)
#22Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#21)
#23Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#22)