error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Started by abrahim abrahao · 10 months ago · 8 messages · general
#1 abrahim abrahao
a_abrahao@yahoo.com.br

I got the error “server process was terminated by signal 11: Segmentation fault” when calling pg_create_logical_replication_slot with the pgoutput plugin; with test_decoding it worked fine. Any idea what is wrong?

Note: I am running in a docker container; I also increased shm-size from 1024mb to 2g, and I am using shared_buffers=1.5GB. This is a test server and nothing else is running on it. It is my first time working with logical replication.
See details below.
postgresql.conf file:
wal_level = logical
max_replication_slots = 10
max_wal_senders = 20
listen_addresses = '*'

psql -U postgres -h postgres -c "SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');"
SSL SYSCALL error: EOF detected
connection to server was lost

< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 00000 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > LOG: Initializing CDC decoder
< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 00000 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > STATEMENT: SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');
< 2025-07-08 14:57:08.821 UTC 00000 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > LOG: server process (PID 1096) was terminated by signal 11: Segmentation fault
< 2025-07-08 14:57:08.821 UTC 00000 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > DETAIL: Failed process was running: SELECT pg_create_logical_replication_slot('support7561_repslot', 'pgoutput');
< 2025-07-08 14:57:08.821 UTC 00000 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.821 UTC > LOG: terminating any other active server processes
< 2025-07-08 14:57:08.829 UTC 00000 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:08.829 UTC > LOG: all server processes terminated; reinitializing
< 2025-07-08 14:57:09.215 UTC 00000 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:09.215 UTC > LOG: database system was interrupted; last known up at 2025-07-08 14:55:39 UTC
< 2025-07-08 14:57:10.037 UTC [unknown] postgres postgres 172.18.0.217(33506) 57P03 2025-07-08 14:57:10 UTC 1101 686d31c6.44d 2025-07-08 14:57:10.037 UTC > FATAL: the database system is in recovery mode
< 2025-07-08 14:57:10.437 UTC 00000 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.437 UTC > LOG: database system was not properly shut down; automatic recovery in progress
< 2025-07-08 14:57:10.450 UTC 00000 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.450 UTC > LOG: redo starts at 1FB9/C0000A0
< 2025-07-08 14:57:10.456 UTC 00000 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.456 UTC > LOG: invalid record length at 1FB9/C054DF8: wanted 24, got 0
< 2025-07-08 14:57:10.456 UTC 00000 2025-07-08 14:57:09 UTC 1098 686d31c5.44a 2025-07-08 14:57:10.456 UTC > LOG: redo done at 1FB9/C054DC0 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
< 2025-07-08 14:57:10.475 UTC 00000 2025-07-08 14:57:09 UTC 1099 686d31c5.44b 2025-07-08 14:57:10.475 UTC > LOG: checkpoint starting: end-of-recovery immediate wait
< 2025-07-08 14:57:10.501 UTC 00000 2025-07-08 14:57:09 UTC 1099 686d31c5.44b 2025-07-08 14:57:10.501 UTC > LOG: checkpoint complete: wrote 86 buffers (0.0%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.010 s, sync=0.007 s, total=0.028 s; sync files=18, longest=0.003 s, average=0.001 s; distance=339 kB, estimate=339 kB
< 2025-07-08 14:57:10.510 UTC 00000 2025-07-08 14:55:38 UTC 923 686d316a.39b 2025-07-08 14:57:10.510 UTC > LOG: database system is ready to accept connections

psql -U postgres -h postgres -c "SELECT pg_create_logical_replication_slot('support7561_repslot', 'test_decoding');"
pg_create_logical_replication_slot
------------------------------------
(support7561_repslot,1FB9/C081668)
(1 row)

postgres@support7560_postgres:/var/lib/postgresql/15/main$ psql -U postgres -h postgres -c "SELECT slot_name, plugin, slot_type, database, active, restart_lsn, confirmed_flush_lsn FROM pg_replication_slots;"
slot_name | plugin | slot_type | database | active | restart_lsn | confirmed_flush_lsn
---------------------+---------------+-----------+----------+--------+--------------+---------------------
support7561_repslot | test_decoding | logical | postgres | f | 1FB9/C081630 | 1FB9/C081668

SHOW shared_buffers;
shared_buffers
----------------
1532512kB
(1 row)

postgres=# \! uname -a
Linux support7560_postgres 6.8.0-1030-gcp #32~22.04.1-Ubuntu SMP Tue Apr 29 23:17:09 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

psql -U postgres -h postgres -c "select version()"
version
-------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 15.13 (Ubuntu 15.13-1.pgdg24.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, 64-bit

#2 Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: abrahim abrahao (#1)
Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Hi Abrahim,

Can you also share the stack trace for the crash?
Also can you share the exact steps used to reproduce the issue?

Thanks and Regards,
Shlok Kyal

#3 Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: Shlok Kyal (#2)
Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Also, while going through the logs, I found:

< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 00000 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > LOG: Initializing CDC decoder

This log is not present in Postgres source code. Why is this log appearing here?
I would also suggest posting this issue to the pgsql-hackers mailing list [1].

[1]: pgsql-hackers@lists.postgresql.org

Thanks and Regards,
Shlok Kyal

#4 Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Shlok Kyal (#3)
RE: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Dear Shlok, Abrahim,

Also, while going through the logs, I found:

< 2025-07-08 14:57:08.653 UTC psql postgres postgres 172.18.0.94(53414) SELECT 00000 2025-07-08 14:57:07 UTC 1096 686d31c3.448 2025-07-08 14:57:08.653 UTC > LOG: Initializing CDC decoder

This log is not present in Postgres source code. Why is this log appearing here?

I found this output in the Citus source code [1]. So I'm afraid you may be loading the shared library provided by Citus when you create the replication slot.

If so, the Citus community may be the better place to discuss the bug.
We can help if you can reproduce the bug using only PostgreSQL core code.

[1]: https://github.com/citusdata/citus/blob/5deaf9a61673e10c183b6d4f13593f168e1c2c10/src/backend/distributed/cdc/cdc_decoder.c#L85
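One quick way to check that theory is to look at what is actually installed in the server's library directory. A minimal sketch, assuming `pg_config` is on the PATH inside the container (the fallback path is the usual Debian/Ubuntu location for PostgreSQL 15 and may differ on your image):

```shell
# Directory the backend loads shared libraries from; fall back to the
# typical pgdg path for PostgreSQL 15 if pg_config is not available.
PKGLIBDIR=$(pg_config --pkglibdir 2>/dev/null || echo /usr/lib/postgresql/15/lib)

# Any Citus libraries found here can be loaded by a backend even without
# shared_preload_libraries being set or CREATE EXTENSION being run.
ls "$PKGLIBDIR"/citus*.so 2>/dev/null || echo "no citus libraries found"
```

If Citus libraries show up even though the extension was never created, that would be consistent with the "Initializing CDC decoder" line in the log.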

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#5 abrahim abrahao
a_abrahao@yahoo.com.br
In reply to: Hayato Kuroda (Fujitsu) (#4)
Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Thanks Hayato and Shlok. The Citus extension package is installed, but it is not preloaded in shared_preload_libraries and the citus extension is not created. I will create a new container without the Citus extension package and add a stack trace (I think this is what you are asking for) as soon as possible; I will post an update here as soon as I complete the test.
See the information below.
show shared_preload_libraries;
   shared_preload_libraries
-------------------------------
 pg_stat_statements, pg_repack
(1 row)
\dx
                                            List of installed extensions
        Name        | Version |   Schema   |                              Description
--------------------+---------+------------+------------------------------------------------------------------------
 btree_gist         | 1.7     | public     | support for indexing common datatypes in GiST
 ltree              | 1.2     | public     | data type for hierarchical tree-like structures
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed
 pg_trgm            | 1.5     | public     | text similarity measurement and index searching based on trigrams
 pgcrypto           | 1.3     | public     | cryptographic functions
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgis            | 3.5.1   | public     | PostGIS geometry and geography spatial types and functions
 uuid-ossp          | 1.1     | public     | generate universally unique identifiers (UUIDs)
(8 rows)

Steps done up to the pg_create_logical_replication_slot command (just the steps; the full commands are not included):
1. Set wal_level, max_replication_slots, max_wal_senders and listen_addresses:
         name          | setting
-----------------------+---------
 listen_addresses      | *
 max_replication_slots | 10
 max_wal_senders       | 20
 wal_level             | logical
2. Changed the pg_hba file and restarted the database: pg_ctl restart -D $POSTGRESQL_DATA
3. Created a user and a publication:
CREATE USER user_rep WITH REPLICATION ENCRYPTED PASSWORD
ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA myg GRANT SELECT ON TABLES TO user_rep;
CREATE PUBLICATION myg_pub FOR TABLES IN SCHEMA myg;
ALTER PUBLICATION myg_pub ADD TABLE myg
4. SELECT snapshot_name FROM pg_create_logical_replication_slot


#6 Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: abrahim abrahao (#5)
RE: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Dear Abrahim

The Citus extension package is installed, but it is not preloaded in shared_preload_libraries
and the citus extension is not created.

It is possible for a shared library to be loaded even when shared_preload_libraries is not set
and CREATE EXTENSION has not been executed. Per my understanding, the specified plugin
name is searched for by the same rule as other dynamically loaded libraries. See [1].

Another example is 'test_decoding'. It is a sample plugin included in postgres core;
anyone can use it via the SQL function, and CREATE EXTENSION is not needed.

```
postgres=# SELECT pg_create_logical_replication_slot('slot', 'test_decoding');
pg_create_logical_replication_slot
------------------------------------
(slot,0/1829CE0)
(1 row)
```
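The lookup rule can be sketched as: take the plugin name, append the platform's shared-library suffix, and probe each directory on the library search path in order, first match winning. The helper below (`resolve_plugin`) is hypothetical and only illustrates that rule; the real server code also expands `$libdir` and honors `dynamic_library_path`:

```shell
# Hypothetical sketch of plugin-name resolution: append ".so" and probe
# each colon-separated directory in order; the first match wins, so a
# library earlier on the path can shadow a stock plugin of the same name.
resolve_plugin() {
    plugin=$1
    old_ifs=$IFS; IFS=':'; set -- $2; IFS=$old_ifs
    for dir in "$@"; do
        if [ -f "$dir/$plugin.so" ]; then
            echo "$dir/$plugin.so"
            return 0
        fi
    done
    return 1
}

# e.g. resolve_plugin pgoutput "$(pg_config --pkglibdir)"
```

That "first match wins" behavior is why a third-party library installed under the same name could be picked up when a slot is created with 'pgoutput'.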

I will create a new container without Citus extension package ...

Yeah, that would be quite helpful for understanding the issue correctly. Thanks for working on it.

[1]: https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-C-DYNLOAD

Best regards,
Hayato Kuroda
FUJITSU LIMITED

#7 abrahim abrahao
a_abrahao@yahoo.com.br
In reply to: Hayato Kuroda (Fujitsu) (#6)
Re: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Hi Hayato and Shlok, I confirmed that it is related to Citus; everything worked after removing the Citus installation from the docker image.
I have not added a stack trace for the new installation yet. It seems the existing Citus installation was done in an unusual way; I will work on figuring out a better way to install it.

Thanks for your help, I appreciate it.


#8 Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: abrahim abrahao (#7)
RE: error “server process was terminated by signal 11: Segmentation fault” running pg_create_logical_replication_slot using pgoutput plugin

Dear Abrahim,

Hi Hayato and Shlok, I confirmed that it is related to Citus; everything worked
after removing the Citus installation from the docker image.

Thanks for the confirmation. I also think the issue is related to Citus. I recommend
reporting it to the Citus community [1] to get the issue resolved.

[1]: https://github.com/citusdata/citus

Best regards,
Hayato Kuroda
FUJITSU LIMITED