pg_xlog on a hot_standby slave
Hi everyone,
Questions about pg_xlogs again...
I have two PostgreSQL 9.1 servers in master/slave streaming replication
(hot_standby).
Psql01 (master) is backed up with Barman and its pg_xlog is correctly
purged (archive_command is used).
However, Psql02 (slave) has a huge pg_xlog (951 files, 15 GB for only
7 days, and it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know
what is suitable for a slave. Leads I've found:
- checkpoints
- archive_command
- archive_cleanup
Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on
Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on
Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command = 'cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'
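A note on the cleanup side, as a sketch with made-up directory and segment names: at each restartpoint the standby substitutes %r in archive_cleanup_command with the name of the oldest segment still needed for a restart, and pg_archivecleanup removes everything in the given directory that sorts before that name. In other words, it prunes wal_archive, not pg_xlog:

```shell
# Illustration only: mimic what pg_archivecleanup does with the %r value.
# Directory and segment names below are made up.
ARCHIVE=$(mktemp -d)
touch "$ARCHIVE"/0000000100000000000000{01,02,03,04}
R=000000010000000000000003     # what the standby substitutes for %r

# pg_archivecleanup keeps %r itself and deletes segments sorting before it.
for f in "$ARCHIVE"/*; do
    if [ "$(basename "$f")" \< "$R" ]; then
        rm "$f"
    fi
done
ls "$ARCHIVE"    # ...03 and ...04 remain
```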
How can I reduce the number of WAL files on the hot_standby slave?
Thanks
Regards.
Xavier C.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On 16 June 2015 at 10:57 AM, "Xavier 12" <maniatux@gmail.com> wrote:
[...]
How can I reduce the number of WAL files on the hot_standby slave?
Depends on what you're talking about. If they are archived WAL,
pg_archivecleanup is what you're looking for.
I don't think so. There is no archive_command and the master doesn't
ship its WAL here.
But how can I check that?
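One way to check, sketched below with an assumed data directory path (adjust it for your cluster): segments the archiver still considers unarchived have a .ready marker under pg_xlog/archive_status, so counting those markers tells you whether archiving is what is holding the files:

```shell
# Sketch: count WAL segments the archiver still considers unarchived.
# The default PGDATA path here is an assumption -- use your own cluster's.
PGDATA=${PGDATA:-/var/lib/postgresql/9.1/main}
status_dir="$PGDATA/pg_xlog/archive_status"
ready=$(ls "$status_dir" 2>/dev/null | grep -c '\.ready$' || true)
echo "segments waiting to be archived: $ready"
```

A count of 0 means the files in pg_xlog are not waiting on an archiver.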
2015-06-16 12:41 GMT+02:00 Guillaume Lelarge <guillaume@lelarge.info>:
[...]
[moving to -bugs]
Re: Xavier 12 2015-06-16 <CAMOV8iB3oRzC4f7UTzOwC2wT08do3voi+PGN07uJq+ayo9E=cQ@mail.gmail.com>
[...]
Hi,
I have the same problem here. Master/slave running on 9.3.current. On
the master everything is normal, but on the slave server, files in
pg_xlog and archive_status pile up. Interestingly, the filenames are
mostly 0x20 apart. (IRC user Kassandry is reporting the same issue on
9.4 as well, including the 0x20 spacing.)
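The 0x20 spacing can be checked from the names themselves: the last 8 hex digits of a WAL filename are the segment number, and consecutive surviving files differ by 0x20 (32 segments, i.e. 512 MB at 16 MB per segment). A bash sketch using two names from the listing below:

```shell
# Each WAL filename ends in 8 hex digits (the segment number).
# Subtracting consecutive surviving names shows the 0x20 gaps.
a=000000010000001700000083
b=0000000100000017000000A3
seg_a=$((16#${a: -8}))
seg_b=$((16#${b: -8}))
printf 'gap: 0x%X\n' $((seg_b - seg_a))   # gap: 0x20
```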
[0]: root@synthesis:/var/lib/postgresql/9.3/ircservices/pg_xlog #
total 2099744
drwx------ 3 postgres postgres 12288 Jun 15 22:29 ./
drwx------ 15 postgres postgres 4096 Jun 15 22:29 ../
-rw------- 1 postgres postgres 16777216 Mär 9 04:25 000000010000001700000057
-rw------- 1 postgres postgres 16777216 Mär 9 19:06 000000010000001700000083
-rw------- 1 postgres postgres 16777216 Mär 10 05:46 0000000100000017000000A3
-rw------- 1 postgres postgres 16777216 Mär 10 16:26 0000000100000017000000C3
-rw------- 1 postgres postgres 16777216 Mär 12 06:26 000000010000001800000035
-rw------- 1 postgres postgres 16777216 Mär 13 01:06 00000001000000180000006D
-rw------- 1 postgres postgres 16777216 Mär 13 11:46 00000001000000180000008D
-rw------- 1 postgres postgres 16777216 Mär 13 22:26 0000000100000018000000AD
-rw------- 1 postgres postgres 16777216 Mär 14 09:06 0000000100000018000000CD
-rw------- 1 postgres postgres 16777216 Mär 14 19:46 0000000100000018000000ED
-rw------- 1 postgres postgres 16777216 Mär 15 05:52 00000001000000190000000D
-rw------- 1 postgres postgres 16777216 Mär 15 16:32 00000001000000190000002D
-rw------- 1 postgres postgres 16777216 Mär 16 03:12 00000001000000190000004D
-rw------- 1 postgres postgres 16777216 Mär 16 13:52 00000001000000190000006D
-rw------- 1 postgres postgres 16777216 Mär 17 00:32 00000001000000190000008D
-rw------- 1 postgres postgres 16777216 Mär 17 11:12 0000000100000019000000AD
-rw------- 1 postgres postgres 16777216 Mär 17 21:52 0000000100000019000000CD
-rw------- 1 postgres postgres 16777216 Mär 18 08:32 0000000100000019000000ED
-rw------- 1 postgres postgres 16777216 Mär 18 19:12 000000010000001A0000000D
-rw------- 1 postgres postgres 16777216 Mär 19 05:53 000000010000001A0000002D
-rw------- 1 postgres postgres 16777216 Mär 19 16:52 000000010000001A0000004D
-rw------- 1 postgres postgres 16777216 Mär 20 03:32 000000010000001A0000006D
-rw------- 1 postgres postgres 16777216 Mär 21 00:52 000000010000001A000000AD
-rw------- 1 postgres postgres 16777216 Mär 21 11:33 000000010000001A000000CD
-rw------- 1 postgres postgres 16777216 Mär 21 22:13 000000010000001A000000ED
-rw------- 1 postgres postgres 16777216 Mär 22 08:32 000000010000001B0000000D
-rw------- 1 postgres postgres 16777216 Mär 22 19:12 000000010000001B0000002D
-rw------- 1 postgres postgres 16777216 Mär 23 05:52 000000010000001B0000004D
-rw------- 1 postgres postgres 16777216 Mär 23 16:32 000000010000001B0000006D
-rw------- 1 postgres postgres 16777216 Mär 24 03:12 000000010000001B0000008D
-rw------- 1 postgres postgres 16777216 Mär 24 13:52 000000010000001B000000AD
-rw------- 1 postgres postgres 16777216 Mär 26 10:52 000000010000001C00000025
-rw------- 1 postgres postgres 16777216 Mär 26 21:32 000000010000001C00000045
-rw------- 1 postgres postgres 16777216 Mär 27 08:12 000000010000001C00000065
-rw------- 1 postgres postgres 16777216 Mär 27 18:52 000000010000001C00000085
-rw------- 1 postgres postgres 16777216 Mär 28 16:13 000000010000001C000000C5
-rw------- 1 postgres postgres 16777216 Mär 29 03:53 000000010000001C000000E5
-rw------- 1 postgres postgres 16777216 Mär 29 17:52 000000010000001D00000010
-rw------- 1 postgres postgres 16777216 Mär 30 04:32 000000010000001D00000030
-rw------- 1 postgres postgres 16777216 Mär 30 15:12 000000010000001D00000050
-rw------- 1 postgres postgres 16777216 Mär 31 01:52 000000010000001D00000070
-rw------- 1 postgres postgres 16777216 Mär 31 12:32 000000010000001D00000090
-rw------- 1 postgres postgres 16777216 Mär 31 23:12 000000010000001D000000B0
-rw------- 1 postgres postgres 16777216 Apr 11 15:32 0000000100000020000000B2
-rw------- 1 postgres postgres 16777216 Apr 12 02:12 0000000100000020000000D2
-rw------- 1 postgres postgres 16777216 Apr 12 12:32 0000000100000020000000F2
-rw------- 1 postgres postgres 16777216 Apr 12 23:12 000000010000002100000012
-rw------- 1 postgres postgres 16777216 Apr 13 09:52 000000010000002100000032
-rw------- 1 postgres postgres 16777216 Apr 13 20:32 000000010000002100000052
-rw------- 1 postgres postgres 16777216 Apr 14 07:12 000000010000002100000072
-rw------- 1 postgres postgres 16777216 Apr 14 17:52 000000010000002100000092
-rw------- 1 postgres postgres 16777216 Apr 15 04:32 0000000100000021000000B2
-rw------- 1 postgres postgres 16777216 Apr 15 15:12 0000000100000021000000D2
-rw------- 1 postgres postgres 16777216 Apr 21 17:32 00000001000000230000008A
-rw------- 1 postgres postgres 16777216 Apr 22 04:12 0000000100000023000000AA
-rw------- 1 postgres postgres 16777216 Apr 22 20:32 0000000100000023000000DB
-rw------- 1 postgres postgres 16777216 Apr 23 07:12 0000000100000023000000FB
-rw------- 1 postgres postgres 16777216 Apr 23 17:52 00000001000000240000001B
-rw------- 1 postgres postgres 16777216 Apr 24 04:32 00000001000000240000003B
-rw------- 1 postgres postgres 16777216 Apr 24 19:52 000000010000002400000069
-rw------- 1 postgres postgres 16777216 Apr 25 06:32 000000010000002400000089
-rw------- 1 postgres postgres 16777216 Apr 27 10:52 000000010000002500000027
-rw------- 1 postgres postgres 16777216 Apr 28 00:52 000000010000002500000051
-rw------- 1 postgres postgres 16777216 Apr 28 03:33 000000010000002500000059
-rw------- 1 postgres postgres 16777216 Apr 29 00:53 000000010000002500000099
-rw------- 1 postgres postgres 16777216 Apr 29 11:33 0000000100000025000000B9
-rw------- 1 postgres postgres 16777216 Apr 29 22:13 0000000100000025000000D9
-rw------- 1 postgres postgres 16777216 Apr 30 08:53 0000000100000025000000F9
-rw------- 1 postgres postgres 16777216 Apr 30 19:33 000000010000002600000019
-rw------- 1 postgres postgres 16777216 Mai 1 06:13 000000010000002600000039
-rw------- 1 postgres postgres 16777216 Mai 1 16:53 000000010000002600000059
-rw------- 1 postgres postgres 16777216 Mai 2 14:13 000000010000002600000099
-rw------- 1 postgres postgres 16777216 Mai 3 00:53 0000000100000026000000B9
-rw------- 1 postgres postgres 16777216 Mai 4 00:52 000000010000002700000002
-rw------- 1 postgres postgres 16777216 Mai 4 11:32 000000010000002700000022
-rw------- 1 postgres postgres 16777216 Mai 4 22:12 000000010000002700000042
-rw------- 1 postgres postgres 16777216 Mai 6 00:52 000000010000002700000092
-rw------- 1 postgres postgres 16777216 Mai 6 11:32 0000000100000027000000B2
-rw------- 1 postgres postgres 16777216 Mai 6 22:12 0000000100000027000000D2
-rw------- 1 postgres postgres 16777216 Mai 7 08:53 0000000100000027000000F2
-rw------- 1 postgres postgres 16777216 Mai 7 19:33 000000010000002800000012
-rw------- 1 postgres postgres 16777216 Mai 8 06:13 000000010000002800000032
-rw------- 1 postgres postgres 16777216 Mai 8 16:53 000000010000002800000052
-rw------- 1 postgres postgres 16777216 Mai 10 09:52 0000000100000028000000CF
-rw------- 1 postgres postgres 16777216 Mai 10 20:00 0000000100000028000000EF
-rw------- 1 postgres postgres 16777216 Mai 11 06:40 00000001000000290000000F
-rw------- 1 postgres postgres 16777216 Mai 11 17:20 00000001000000290000002F
-rw------- 1 postgres postgres 16777216 Mai 12 04:00 00000001000000290000004F
-rw------- 1 postgres postgres 16777216 Mai 12 14:40 00000001000000290000006F
-rw------- 1 postgres postgres 16777216 Mai 13 01:20 00000001000000290000008F
-rw------- 1 postgres postgres 16777216 Mai 13 12:01 0000000100000029000000AF
-rw------- 1 postgres postgres 16777216 Mai 13 22:38 0000000100000029000000CF
-rw------- 1 postgres postgres 16777216 Mai 14 09:18 0000000100000029000000EF
-rw------- 1 postgres postgres 16777216 Mai 14 19:58 000000010000002A0000000F
-rw------- 1 postgres postgres 16777216 Mai 15 06:38 000000010000002A0000002F
-rw------- 1 postgres postgres 16777216 Mai 15 17:18 000000010000002A0000004F
-rw------- 1 postgres postgres 16777216 Mai 16 03:58 000000010000002A0000006F
-rw------- 1 postgres postgres 16777216 Mai 17 01:19 000000010000002A000000AF
-rw------- 1 postgres postgres 16777216 Mai 18 00:32 000000010000002A000000F6
-rw------- 1 postgres postgres 16777216 Mai 18 11:12 000000010000002B00000016
-rw------- 1 postgres postgres 16777216 Mai 18 21:52 000000010000002B00000036
-rw------- 1 postgres postgres 16777216 Mai 19 08:32 000000010000002B00000056
-rw------- 1 postgres postgres 16777216 Mai 19 19:12 000000010000002B00000076
-rw------- 1 postgres postgres 16777216 Mai 20 05:52 000000010000002B00000096
-rw------- 1 postgres postgres 16777216 Mai 20 16:33 000000010000002B000000B6
-rw------- 1 postgres postgres 16777216 Mai 21 03:12 000000010000002B000000D6
-rw------- 1 postgres postgres 16777216 Jun 7 19:51 0000000100000030000000D5
-rw------- 1 postgres postgres 16777216 Jun 8 06:31 0000000100000030000000F5
-rw------- 1 postgres postgres 16777216 Jun 8 17:11 000000010000003100000015
-rw------- 1 postgres postgres 16777216 Jun 9 03:51 000000010000003100000035
-rw------- 1 postgres postgres 16777216 Jun 9 14:31 000000010000003100000055
-rw------- 1 postgres postgres 16777216 Jun 10 08:51 00000001000000310000008C
-rw------- 1 postgres postgres 16777216 Jun 10 19:31 0000000100000031000000AC
-rw------- 1 postgres postgres 16777216 Jun 11 06:11 0000000100000031000000CC
-rw------- 1 postgres postgres 16777216 Jun 11 16:51 0000000100000031000000EC
-rw------- 1 postgres postgres 16777216 Jun 12 03:31 00000001000000320000000C
-rw------- 1 postgres postgres 16777216 Jun 12 14:11 00000001000000320000002C
-rw------- 1 postgres postgres 16777216 Jun 13 00:52 00000001000000320000004C
-rw------- 1 postgres postgres 16777216 Jun 13 11:32 00000001000000320000006C
-rw------- 1 postgres postgres 16777216 Jun 13 22:12 00000001000000320000008C
-rw------- 1 postgres postgres 16777216 Jun 15 07:14 0000000100000032000000F1
-rw------- 1 postgres postgres 16777216 Jun 15 20:55 00000001000000330000001A
-rw------- 1 postgres postgres 16777216 Jun 15 21:14 00000001000000330000001B
-rw------- 1 postgres postgres 16777216 Jun 15 21:34 00000001000000330000001C
-rw------- 1 postgres postgres 16777216 Jun 15 22:29 00000001000000330000001D
-rw------- 1 postgres postgres 16777216 Jun 15 22:29 00000001000000330000001E
-rw------- 1 postgres postgres 16777216 Jun 15 22:29 00000001000000330000001F
-rw------- 1 postgres postgres 16777216 Jun 15 20:35 000000010000003300000020
drwx------ 2 postgres postgres 16384 Jun 15 22:15 archive_status/
[0]: root@synthesis:/var/lib/postgresql/9.3/ircservices/pg_xlog #
total 28
drwx------ 2 postgres postgres 16384 Jun 15 22:15 ./
drwx------ 3 postgres postgres 12288 Jun 15 22:29 ../
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000017.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000018.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000019.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001A.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001B.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001C.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001D.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001E.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000001F.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000020.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000021.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000022.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000023.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000024.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000025.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000026.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000027.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000028.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 000000010000001500000029.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000002A.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000002B.ready
-rw------- 1 postgres postgres 0 Mär 8 16:48 00000001000000150000002C.ready
-rw------- 1 postgres postgres 0 Mär 9 05:47 000000010000001700000057.ready
-rw------- 1 postgres postgres 0 Mär 9 20:06 000000010000001700000083.ready
-rw------- 1 postgres postgres 0 Mär 10 06:46 0000000100000017000000A3.ready
-rw------- 1 postgres postgres 0 Mär 10 17:47 0000000100000017000000C3.ready
-rw------- 1 postgres postgres 0 Mär 12 08:06 000000010000001800000035.ready
-rw------- 1 postgres postgres 0 Mär 13 02:27 00000001000000180000006D.ready
-rw------- 1 postgres postgres 0 Mär 13 13:07 00000001000000180000008D.ready
-rw------- 1 postgres postgres 0 Mär 14 00:07 0000000100000018000000AD.ready
-rw------- 1 postgres postgres 0 Mär 14 10:07 0000000100000018000000CD.ready
-rw------- 1 postgres postgres 0 Mär 14 20:49 0000000100000018000000ED.ready
-rw------- 1 postgres postgres 0 Mär 15 06:52 00000001000000190000000D.ready
-rw------- 1 postgres postgres 0 Mär 15 17:53 00000001000000190000002D.ready
-rw------- 1 postgres postgres 0 Mär 16 04:32 00000001000000190000004D.ready
-rw------- 1 postgres postgres 0 Mär 16 15:33 00000001000000190000006D.ready
-rw------- 1 postgres postgres 0 Mär 17 01:33 00000001000000190000008D.ready
-rw------- 1 postgres postgres 0 Mär 17 12:12 0000000100000019000000AD.ready
-rw------- 1 postgres postgres 0 Mär 17 23:13 0000000100000019000000CD.ready
-rw------- 1 postgres postgres 0 Mär 18 09:53 0000000100000019000000ED.ready
-rw------- 1 postgres postgres 0 Mär 18 20:53 000000010000001A0000000D.ready
-rw------- 1 postgres postgres 0 Mär 19 06:53 000000010000001A0000002D.ready
-rw------- 1 postgres postgres 0 Mär 19 18:33 000000010000001A0000004D.ready
-rw------- 1 postgres postgres 0 Mär 20 04:34 000000010000001A0000006D.ready
-rw------- 1 postgres postgres 0 Mär 21 02:13 000000010000001A000000AD.ready
-rw------- 1 postgres postgres 0 Mär 21 12:53 000000010000001A000000CD.ready
-rw------- 1 postgres postgres 0 Mär 21 23:54 000000010000001A000000ED.ready
-rw------- 1 postgres postgres 0 Mär 22 09:32 000000010000001B0000000D.ready
-rw------- 1 postgres postgres 0 Mär 22 20:12 000000010000001B0000002D.ready
-rw------- 1 postgres postgres 0 Mär 23 07:12 000000010000001B0000004D.ready
-rw------- 1 postgres postgres 0 Mär 23 17:53 000000010000001B0000006D.ready
-rw------- 1 postgres postgres 0 Mär 24 04:53 000000010000001B0000008D.ready
-rw------- 1 postgres postgres 0 Mär 24 14:53 000000010000001B000000AD.ready
-rw------- 1 postgres postgres 0 Mär 26 12:15 000000010000001C00000025.ready
-rw------- 1 postgres postgres 0 Mär 26 22:53 000000010000001C00000045.ready
-rw------- 1 postgres postgres 0 Mär 27 09:53 000000010000001C00000065.ready
-rw------- 1 postgres postgres 0 Mär 27 19:53 000000010000001C00000085.ready
-rw------- 1 postgres postgres 0 Mär 28 17:34 000000010000001C000000C5.ready
-rw------- 1 postgres postgres 0 Mär 29 05:13 000000010000001C000000E5.ready
-rw------- 1 postgres postgres 0 Mär 29 18:53 000000010000001D00000010.ready
-rw------- 1 postgres postgres 0 Mär 30 05:32 000000010000001D00000030.ready
-rw------- 1 postgres postgres 0 Mär 30 16:32 000000010000001D00000050.ready
-rw------- 1 postgres postgres 0 Mär 31 03:12 000000010000001D00000070.ready
-rw------- 1 postgres postgres 0 Mär 31 14:13 000000010000001D00000090.ready
-rw------- 1 postgres postgres 0 Apr 1 00:13 000000010000001D000000B0.ready
-rw------- 1 postgres postgres 0 Apr 11 16:33 0000000100000020000000B2.ready
-rw------- 1 postgres postgres 0 Apr 12 03:13 0000000100000020000000D2.ready
-rw------- 1 postgres postgres 0 Apr 12 13:52 0000000100000020000000F2.ready
-rw------- 1 postgres postgres 0 Apr 13 00:33 000000010000002100000012.ready
-rw------- 1 postgres postgres 0 Apr 13 11:32 000000010000002100000032.ready
-rw------- 1 postgres postgres 0 Apr 13 21:32 000000010000002100000052.ready
-rw------- 1 postgres postgres 0 Apr 14 08:13 000000010000002100000072.ready
-rw------- 1 postgres postgres 0 Apr 14 19:13 000000010000002100000092.ready
-rw------- 1 postgres postgres 0 Apr 15 05:52 0000000100000021000000B2.ready
-rw------- 1 postgres postgres 0 Apr 15 16:53 0000000100000021000000D2.ready
-rw------- 1 postgres postgres 0 Apr 21 19:13 00000001000000230000008A.ready
-rw------- 1 postgres postgres 0 Apr 22 05:12 0000000100000023000000AA.ready
-rw------- 1 postgres postgres 0 Apr 22 21:53 0000000100000023000000DB.ready
-rw------- 1 postgres postgres 0 Apr 23 08:33 0000000100000023000000FB.ready
-rw------- 1 postgres postgres 0 Apr 23 19:33 00000001000000240000001B.ready
-rw------- 1 postgres postgres 0 Apr 24 05:33 00000001000000240000003B.ready
-rw------- 1 postgres postgres 0 Apr 24 21:13 000000010000002400000069.ready
-rw------- 1 postgres postgres 0 Apr 25 08:13 000000010000002400000089.ready
-rw------- 1 postgres postgres 0 Apr 27 12:13 000000010000002500000027.ready
-rw------- 1 postgres postgres 0 Apr 28 02:33 000000010000002500000051.ready
-rw------- 1 postgres postgres 0 Apr 28 04:53 000000010000002500000059.ready
-rw------- 1 postgres postgres 0 Apr 29 01:53 000000010000002500000099.ready
-rw------- 1 postgres postgres 0 Apr 29 12:34 0000000100000025000000B9.ready
-rw------- 1 postgres postgres 0 Apr 29 23:33 0000000100000025000000D9.ready
-rw------- 1 postgres postgres 0 Apr 30 10:14 0000000100000025000000F9.ready
-rw------- 1 postgres postgres 0 Apr 30 21:13 000000010000002600000019.ready
-rw------- 1 postgres postgres 0 Mai 1 07:14 000000010000002600000039.ready
-rw------- 1 postgres postgres 0 Mai 1 17:54 000000010000002600000059.ready
-rw------- 1 postgres postgres 0 Mai 2 15:34 000000010000002600000099.ready
-rw------- 1 postgres postgres 0 Mai 3 02:34 0000000100000026000000B9.ready
-rw------- 1 postgres postgres 0 Mai 4 02:12 000000010000002700000002.ready
-rw------- 1 postgres postgres 0 Mai 4 13:13 000000010000002700000022.ready
-rw------- 1 postgres postgres 0 Mai 4 23:14 000000010000002700000042.ready
-rw------- 1 postgres postgres 0 Mai 6 01:53 000000010000002700000092.ready
-rw------- 1 postgres postgres 0 Mai 6 12:34 0000000100000027000000B2.ready
-rw------- 1 postgres postgres 0 Mai 6 23:33 0000000100000027000000D2.ready
-rw------- 1 postgres postgres 0 Mai 7 10:13 0000000100000027000000F2.ready
-rw------- 1 postgres postgres 0 Mai 7 21:13 000000010000002800000012.ready
-rw------- 1 postgres postgres 0 Mai 8 07:14 000000010000002800000032.ready
-rw------- 1 postgres postgres 0 Mai 8 17:53 000000010000002800000052.ready
-rw------- 1 postgres postgres 0 Mai 10 11:13 0000000100000028000000CF.ready
-rw------- 1 postgres postgres 0 Mai 10 21:21 0000000100000028000000EF.ready
-rw------- 1 postgres postgres 0 Mai 11 08:01 00000001000000290000000F.ready
-rw------- 1 postgres postgres 0 Mai 11 19:02 00000001000000290000002F.ready
-rw------- 1 postgres postgres 0 Mai 12 05:01 00000001000000290000004F.ready
-rw------- 1 postgres postgres 0 Mai 12 15:41 00000001000000290000006F.ready
-rw------- 1 postgres postgres 0 Mai 13 02:41 00000001000000290000008F.ready
-rw------- 1 postgres postgres 0 Mai 13 13:39 0000000100000029000000AF.ready
-rw------- 1 postgres postgres 0 Mai 13 23:39 0000000100000029000000CF.ready
-rw------- 1 postgres postgres 0 Mai 14 10:21 0000000100000029000000EF.ready
-rw------- 1 postgres postgres 0 Mai 14 21:19 000000010000002A0000000F.ready
-rw------- 1 postgres postgres 0 Mai 15 07:59 000000010000002A0000002F.ready
-rw------- 1 postgres postgres 0 Mai 15 19:00 000000010000002A0000004F.ready
-rw------- 1 postgres postgres 0 Mai 16 04:59 000000010000002A0000006F.ready
-rw------- 1 postgres postgres 0 Mai 17 02:39 000000010000002A000000AF.ready
-rw------- 1 postgres postgres 0 Mai 18 01:32 000000010000002A000000F6.ready
-rw------- 1 postgres postgres 0 Mai 18 12:12 000000010000002B00000016.ready
-rw------- 1 postgres postgres 0 Mai 18 23:13 000000010000002B00000036.ready
-rw------- 1 postgres postgres 0 Mai 19 09:53 000000010000002B00000056.ready
-rw------- 1 postgres postgres 0 Mai 19 20:53 000000010000002B00000076.ready
-rw------- 1 postgres postgres 0 Mai 20 06:53 000000010000002B00000096.ready
-rw------- 1 postgres postgres 0 Mai 20 17:33 000000010000002B000000B6.ready
-rw------- 1 postgres postgres 0 Mai 21 04:33 000000010000002B000000D6.ready
-rw------- 1 postgres postgres 0 Jun 7 21:11 0000000100000030000000D5.ready
-rw------- 1 postgres postgres 0 Jun 8 08:12 0000000100000030000000F5.ready
-rw------- 1 postgres postgres 0 Jun 8 18:12 000000010000003100000015.ready
-rw------- 1 postgres postgres 0 Jun 9 04:52 000000010000003100000035.ready
-rw------- 1 postgres postgres 0 Jun 9 15:52 000000010000003100000055.ready
-rw------- 1 postgres postgres 0 Jun 10 10:12 00000001000000310000008C.ready
-rw------- 1 postgres postgres 0 Jun 10 20:52 0000000100000031000000AC.ready
-rw------- 1 postgres postgres 0 Jun 11 07:52 0000000100000031000000CC.ready
-rw------- 1 postgres postgres 0 Jun 11 17:52 0000000100000031000000EC.ready
-rw------- 1 postgres postgres 0 Jun 12 04:32 00000001000000320000000C.ready
-rw------- 1 postgres postgres 0 Jun 12 15:32 00000001000000320000002C.ready
-rw------- 1 postgres postgres 0 Jun 13 02:12 00000001000000320000004C.ready
-rw------- 1 postgres postgres 0 Jun 13 13:13 00000001000000320000006C.ready
-rw------- 1 postgres postgres 0 Jun 13 23:12 00000001000000320000008C.ready
-rw------- 1 postgres postgres 0 Jun 15 08:15 0000000100000032000000F1.ready
-rw------- 1 postgres postgres 0 Jun 15 20:55 00000001000000330000001A.done
-rw------- 1 postgres postgres 0 Jun 15 21:15 00000001000000330000001B.done
-rw------- 1 postgres postgres 0 Jun 15 21:35 00000001000000330000001C.done
-rw------- 1 postgres postgres 0 Jun 15 21:55 00000001000000330000001D.done
-rw------- 1 postgres postgres 0 Jun 15 22:15 00000001000000330000001E.done
archive_command = ':'
archive_mode = yes
archive_timeout = 20min
autovacuum = yes
checkpoint_timeout = 1h
data_directory = '/var/lib/postgresql/9.3/ircservices'
datestyle = 'iso, mdy'
external_pid_file = '/var/run/postgresql/9.3-ircservices.pid'
hba_file = '/etc/postgresql/9.3/ircservices/pg_hba.conf'
hot_standby = on
ident_file = '/etc/postgresql/9.3/ircservices/pg_ident.conf'
lc_messages = C
lc_monetary = C
lc_numeric = C
lc_time = C
listen_addresses = '*'
log_destination = syslog
log_line_prefix = '%t '
max_connections = 100
port = 5433
shared_buffers = 24MB
ssl = true
ssl_ca_file = '/etc/ssl/certs/ca-oftc.pem'
ssl_cert_file = '/etc/ssl/certs/thishost.pem'
ssl_key_file = '/etc/ssl/private/thishost.key'
track_counts = yes
unix_socket_directories = '/var/run/postgresql'
wal_level = hot_standby
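For context on the config above: ':' as archive_command is the shell no-op. It ignores its arguments and always exits 0, so every segment handed to the archiver should be marked .done immediately. A minimal demonstration:

```shell
# ':' is the shell builtin no-op: it ignores its arguments and returns
# success, so as an archive_command it "archives" every segment instantly.
: /var/lib/postgresql/9.3/ircservices/pg_xlog/000000010000003300000060
echo "exit status: $?"   # exit status: 0
```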
Jun 15 20:29:29 synthesis postgres[27944]: [5-1] 2015-06-15 20:29:29 GMT LOG: redo starts at 33/1D000024
Jun 15 20:29:33 synthesis postgres[27944]: [6-1] 2015-06-15 20:29:33 GMT LOG: restored log file "00000001000000330000001E" from archive
Jun 15 20:29:34 synthesis postgres[27944]: [7-1] 2015-06-15 20:29:34 GMT LOG: consistent recovery state reached at 33/1F0848B8
Jun 15 20:29:34 synthesis postgres[27943]: [2-1] 2015-06-15 20:29:34 GMT LOG: database system is ready to accept read only connections
Jun 15 20:29:34 synthesis postgres[27944]: [8-1] 2015-06-15 20:29:34 GMT LOG: invalid record length at 33/1F0848B8
Jun 15 20:29:34 synthesis postgres[28045]: [3-1] 2015-06-15 20:29:34 GMT LOG: started streaming WAL from primary at 33/1F000000 on timeline 1
In the meantime, I've manually removed the .ready files that do not
have a corresponding xlog file, but the files do not get cleaned up.
Since I gathered the data above yesterday evening, a new file
(00000001000000330000003F) has appeared, but now there's even a gap in
the .done files:
[0]: root@synthesis:/var/lib/postgresql/9.3/ircservices/pg_xlog # l archive_status/ |tail
-rw------- 1 postgres postgres 0 Jun 11 17:52 0000000100000031000000EC.ready
-rw------- 1 postgres postgres 0 Jun 12 04:32 00000001000000320000000C.ready
-rw------- 1 postgres postgres 0 Jun 12 15:32 00000001000000320000002C.ready
-rw------- 1 postgres postgres 0 Jun 13 02:12 00000001000000320000004C.ready
-rw------- 1 postgres postgres 0 Jun 13 13:13 00000001000000320000006C.ready
-rw------- 1 postgres postgres 0 Jun 13 23:12 00000001000000320000008C.ready
-rw------- 1 postgres postgres 0 Jun 15 08:15 0000000100000032000000F1.ready
-rw------- 1 postgres postgres 0 Jun 16 10:15 00000001000000330000003F.ready
-rw------- 1 postgres postgres 0 Jun 16 19:35 00000001000000330000005E.done
-rw------- 1 postgres postgres 0 Jun 16 20:15 000000010000003300000060.done
-rw------- 1 postgres postgres 16777216 Jun 15 07:14 0000000100000032000000F1
-rw------- 1 postgres postgres 16777216 Jun 16 09:14 00000001000000330000003F
-rw------- 1 postgres postgres 16777216 Jun 16 19:35 00000001000000330000005E
-rw------- 1 postgres postgres 16777216 Jun 16 19:55 00000001000000330000005F
-rw------- 1 postgres postgres 16777216 Jun 16 20:14 000000010000003300000060
-rw------- 1 postgres postgres 16777216 Jun 16 20:31 000000010000003300000061
-rw------- 1 postgres postgres 16777216 Jun 16 18:35 000000010000003300000062
-rw------- 1 postgres postgres 16777216 Jun 16 18:55 000000010000003300000063
-rw------- 1 postgres postgres 16777216 Jun 16 19:15 000000010000003300000064
drwx------ 2 postgres postgres 16384 Jun 16 20:15 archive_status/
[0]: root@synthesis:/var/lib/postgresql/9.3/ircservices/pg_xlog # l archive_status/ |tail
-rw------- 1 postgres postgres 0 Jun 11 17:52 0000000100000031000000EC.ready
-rw------- 1 postgres postgres 0 Jun 12 04:32 00000001000000320000000C.ready
-rw------- 1 postgres postgres 0 Jun 12 15:32 00000001000000320000002C.ready
-rw------- 1 postgres postgres 0 Jun 13 02:12 00000001000000320000004C.ready
-rw------- 1 postgres postgres 0 Jun 13 13:13 00000001000000320000006C.ready
-rw------- 1 postgres postgres 0 Jun 13 23:12 00000001000000320000008C.ready
-rw------- 1 postgres postgres 0 Jun 15 08:15 0000000100000032000000F1.ready
-rw------- 1 postgres postgres 0 Jun 16 10:15 00000001000000330000003F.ready
-rw------- 1 postgres postgres 0 Jun 16 19:35 00000001000000330000005E.done
-rw------- 1 postgres postgres 0 Jun 16 20:15 000000010000003300000060.done
Christoph
--
cb@df7cb.de | http://www.df7cb.de/
On Jun 16, 2015, at 11:35 AM, Christoph Berg <cb@df7cb.de> wrote:
[moving to -bugs]
Re: Xavier 12 2015-06-16 <CAMOV8iB3oRzC4f7UTzOwC2wT08do3voi+PGN07uJq+ayo9E=cQ@mail.gmail.com>
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby). Psql01 (master) is backed up with Barman and pg_xlogs is
correctly purged (archive_command is used). However, Psql02 (slave) has a
huge pg_xlog (951 files, 15G for 7 days only; it keeps growing until disk
space is full). I have found documentation, tutorials and mailing list
threads, but I don't know what is suitable for a slave. Leads I've found:

Hi,
I have the same problem here. Master/slave running on 9.3.current. On
the master everything is normal, but on the slave server, files in
pg_xlog and archive_status pile up. Interestingly, the filenames are
mostly 0x20 apart. (IRC user Kassandry is reporting the same issue on
9.4 as well, including the 0x20 spacing.)
I’ve seen this before, but haven’t been able to make a reproducible test case yet.
Are you by chance using SSL to talk to the primary server? Is the ssl_renegotiation_limit the default of 512MB? 32 WAL files at 16MB each = 512MB. I found that it would always leave the WAL file from before the invalid record length message. Does that seem to be the case for you as well?
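Jeff's arithmetic can be checked directly; a minimal sketch in plain shell (nothing here touches a live server):

```shell
# Why the leaked filenames are mostly 0x20 apart: with the default
# ssl_renegotiation_limit of 512MB and 16MB WAL segments, a renegotiation
# falls due every 512 / 16 = 32 (0x20) segments.
segment_mb=16
limit_mb=512
segments=$((limit_mb / segment_mb))
printf 'renegotiation every %d (0x%X) segments\n' "$segments" "$segments"
```

To see the limit actually in effect, `SHOW ssl_renegotiation_limit;` in psql reports it (512MB was the default on these releases).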
--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
2015-06-16 15:55 GMT+02:00 Xavier 12 <maniatux@gmail.com>:
I don't think so. There is no archive_command and the master doesn't
ship its wal here.
But how can I check that?

What's the complete path to the directory on the slave that contains 951
files? What does PostgreSQL say in its log files?
2015-06-16 12:41 GMT+02:00 Guillaume Lelarge <guillaume@lelarge.info>:
Le 16 juin 2015 10:57 AM, "Xavier 12" <maniatux@gmail.com> a écrit :
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on

Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command = 'cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

How can I reduce the number of WAL files on the hot_standby slave?
Depends on what you're talking about. If they are archived wal,
pg_archivecleanup is what you're looking for.
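For the archived-WAL case, here is a hedged sketch of what that cleanup looks like by hand, using the paths from the recovery.conf quoted earlier in the thread; the segment name is just an example taken from a listing later in this thread, and `-n` makes it a dry run:

```shell
# Dry run: print the archived segments that pg_archivecleanup would delete,
# i.e. everything older than the named restartpoint segment.
/usr/lib/postgresql/9.1/bin/pg_archivecleanup -n \
    /var/lib/postgresql/9.1/wal_archive/ 000000040000040B00000080
# Re-run without -n to actually remove them.
```

Note this only trims the archive directory, not the server-managed pg_xlog directory itself.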
--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com
On 06/16/2015 01:16 PM, Jeff Frost wrote:
On Jun 16, 2015, at 11:35 AM, Christoph Berg <cb@df7cb.de> wrote:
[moving to -bugs]
Re: Xavier 12 2015-06-16 <CAMOV8iB3oRzC4f7UTzOwC2wT08do3voi+PGN07uJq+ayo9E=cQ@mail.gmail.com>
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby). Psql01 (master) is backed up with Barman and pg_xlogs is
correctly purged (archive_command is used). However, Psql02 (slave) has a
huge pg_xlog (951 files, 15G for 7 days only; it keeps growing until disk
space is full). I have found documentation, tutorials and mailing list
threads, but I don't know what is suitable for a slave. Leads I've found:

Hi,

I have the same problem here. Master/slave running on 9.3.current. On
the master everything is normal, but on the slave server, files in
pg_xlog and archive_status pile up. Interestingly, the filenames are
mostly 0x20 apart. (IRC user Kassandry is reporting the same issue on
9.4 as well, including the 0x20 spacing.)

I've seen this before, but haven't been able to make a reproducible test case yet.

Are you by chance using SSL to talk to the primary server? Is ssl_renegotiation_limit the default of 512MB? 32 WAL files at 16MB each = 512MB. I found that it would always leave the WAL file from before the invalid record length message. Does that seem to be the case for you as well?
Hello Jeff,
To add to this on PostgreSQL 9.4 ( Kassandry from IRC ), yes, I see SSL
errors in my logs.
I turned off the archive_command I had running on one of my three
replicas, which recycled all of the .ready files and all of the
outstanding xlogs.
I re-enabled the archive_command and waited.
I got this in my logs:
< @[] LOG: restartpoint complete: wrote 15437 buffers (2.9%); 0
transaction log file(s) added, 0 removed, 5 recycled; write=269.358 s,
sync=0.035 s, total=269.397 s; sync files=202, longest=0.008 s,
average=0.000 s
< @[] LOG: recovery restart point at 650/4D01CCA0
< @[] DETAIL: last completed transaction was at log time 2015-06-16
21:41:41.990409+00
< @[] LOG: restartpoint starting: time
< @[] LOG: restartpoint complete: wrote 115 buffers (0.0%); 0
transaction log file(s) added, 0 removed, 12 recycled; write=11.446 s,
sync=0.005 s, total=11.455 s; sync files=29, longest=0.001 s,
average=0.000 s
< @[] LOG: recovery restart point at 650/5204B6C8
< @[] DETAIL: last completed transaction was at log time 2015-06-16
21:42:24.524081+00
< @[] FATAL: could not send data to WAL stream: SSL error: unexpected
record
< @[] LOG: unexpected pageaddr 650/18000000 in log segment
00000001000006500000005A, offset 0
And a ready file appeared and stayed for 000000010000065000000059 :
-rw------- 1 postgres postgres 0 Jun 16 21:57 000000010000065000000059.ready
On my other streaming replica, there are lots of these log messages, and
it looks like there is also a ready file for each of the segments
previous to the segment mentioned in the unexpected pageaddr message.
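Christoph mentioned earlier that he removed .ready markers whose xlog was gone; a small sketch for spotting those orphan markers (the pg_xlog path is an assumption, adjust it to your cluster):

```shell
# List .ready markers in archive_status whose WAL segment no longer exists
# in pg_xlog (orphans left behind by the leak described above).
PGXLOG=${PGXLOG:-/var/lib/postgresql/9.4/main/pg_xlog}
for f in "$PGXLOG"/archive_status/*.ready; do
    [ -e "$f" ] || continue            # no .ready files at all
    seg=$(basename "$f" .ready)
    [ -e "$PGXLOG/$seg" ] || echo "orphan marker: $f"
done
```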
Hope this helps.
Please let me know if I can gather further data to help fix this. =)
Regards,
Lacey
On Tue, Jun 16, 2015 at 6:55 PM, Xavier 12 <maniatux@gmail.com> wrote:
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on

Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command = 'cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

How can I reduce the number of WAL files on the hot_standby slave?
If the number of WAL files in pg_xlog are growing, then you need to look at
why the files are not getting deleted.
Do you see master and standby in sync? You can check that by getting the
current pg_xlog position on the standby.
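A sketch of that check with the 9.1-era function names (run each query on the server named in the comment; which values count as "close enough" depends on your write rate):

```sql
-- On the master:
SELECT pg_current_xlog_location();

-- On the standby: position received from the master, and position replayed.
-- If both keep advancing and stay close to the master's value, the
-- standby is in sync.
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();
```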
Regards,
Venkata Balaji N
Fujitsu Australia
On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).
Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).
However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:
- checkpoints
- archive_command
- archive_cleanup
Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
What's this parameter's value on Slave?
autovacuum = on
Slave postgresql.conf :
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on
Slave recovery.conf :
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'
How can I reduce the number of WAL files on the hot_standby slave?
Thanks
Regards.
Xavier C.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On 16/06/2015 22:28, Guillaume Lelarge wrote:
2015-06-16 15:55 GMT+02:00 Xavier 12 <maniatux@gmail.com>:

I don't think so. There is no archive_command and the master doesn't
ship its wal here.
But how can I check that?

What's the complete path to the directory on the slave that contains
951 files? What does PostgreSQL say in its log files?
in /var/lib/postgresql/9.1/main/pg_xlog/
1059 files today.
Too much to copy/paste here.
Here are the first ones :
-rw------- 1 postgres postgres 16777216 Jun 9 08:40
000000040000040B0000007E
-rw------- 1 postgres postgres 16777216 Jun 9 08:41
000000040000040B0000007F
-rw------- 1 postgres postgres 16777216 Jun 9 08:42
000000040000040B00000080
-rw------- 1 postgres postgres 16777216 Jun 9 08:44
000000040000040B00000081
There are no .done or .ready and archive_status is empty.
Nothing critical in the logs :
Jun 17 08:55:11 psql02 postgres[4231]: [2-1] 2015-06-17 08:55:11 CEST
LOG: paquet de démarrage incomplet
Jun 17 08:55:41 psql02 postgres[4322]: [2-1] 2015-06-17 08:55:41 CEST
LOG: paquet de démarrage incomplet
Jun 17 08:56:11 psql02 postgres[4356]: [2-1] 2015-06-17 08:56:11 CEST
LOG: paquet de démarrage incomplet
Jun 17 08:56:41 psql02 postgres[4460]: [2-1] 2015-06-17 08:56:41 CEST
LOG: paquet de démarrage incomplet
Jun 17 08:56:55 psql02 postgres[4514]: [2-1] 2015-06-17 08:56:55 CEST
ERREUR: restauration en cours
Jun 17 08:56:55 psql02 postgres[4514]: [2-2] 2015-06-17 08:56:55 CEST
ASTUCE : les fonctions de contrôle des journaux de transactions ne
peuvent pas
Jun 17 08:56:55 psql02 postgres[4514]: [2-3] #011être exécutées lors de
la restauration.
Jun 17 08:56:55 psql02 postgres[4514]: [2-4] 2015-06-17 08:56:55 CEST
INSTRUCTION : select pg_current_xlog_location()
pg_current_xlog_location() is for a Zabbix check; the "ERREUR" ("recovery
is in progress") is because that server is read-only.
Xavier C.
2015-06-16 12:41 GMT+02:00 Guillaume Lelarge <guillaume@lelarge.info>:

Le 16 juin 2015 10:57 AM, "Xavier 12" <maniatux@gmail.com> a écrit :

Hi everyone,

Questions about pg_xlogs again...

I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on

Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command = 'cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

How can I reduce the number of WAL files on the hot_standby slave?

Depends on what you're talking about. If they are archived wal,
pg_archivecleanup is what you're looking for.

--
Guillaume.
http://blog.guillaume.lelarge.info
http://www.dalibo.com
On 17/06/2015 02:44, Venkata Balaji N wrote:
On Tue, Jun 16, 2015 at 6:55 PM, Xavier 12 <maniatux@gmail.com> wrote:

Hi everyone,

Questions about pg_xlogs again...

I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
autovacuum = on

Slave postgresql.conf:
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command = 'cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

How can I reduce the number of WAL files on the hot_standby slave?

If the number of WAL files in pg_xlog are growing, then you need to
look at why the files are not getting deleted.

Do you see master and standby in sync? You can check that by getting
the current pg_xlog position on the standby.

Regards,
Venkata Balaji N
Fujitsu Australia
I have a Zabbix check for pg_xlog in master/slave indeed.
Xavier C.
On 17/06/2015 03:17, Sameer Kumar wrote:
On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:

Hi everyone,

Questions about pg_xlogs again...

I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

What's this parameter's value on Slave?
Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.
Xavier C.
autovacuum = on
Slave postgresql.conf :
[...]
wal_level = minimal
wal_keep_segments = 32
hot_standby = on

Slave recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

How can I reduce the number of WAL files on the hot_standby slave?
Thanks
Regards.
Xavier C.
Re: Jeff Frost 2015-06-16 <67E2F20A-6A2E-484E-BF97-544F1FC66566@pgexperts.com>
I’ve seen this before, but haven’t been able to make a reproducible test case yet.
Are you by chance using SSL to talk to the primary server? Is the ssl_renegotiation_limit the default of 512MB? 32 WAL files at 16MB each = 512MB. I found that it would always leave the WAL file from before the invalid record length message. Does that seem to be the case for you as well?
Yes, SSL, with default settings. I can confirm your
wal-file-from-before analysis:
Jun 16 07:14:59 synthesis postgres[32525]: [8-1] 2015-06-16 07:14:59 GMT LOG: unexpected pageaddr 33/39000000 in log segment 000000010000003300000040, offset 0
Jun 16 07:15:00 synthesis postgres[11514]: [3-1] 2015-06-16 07:15:00 GMT LOG: started streaming WAL from primary at 33/40000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 16 09:14 00000001000000330000003F
Jun 16 17:55:01 synthesis postgres[32525]: [9-1] 2015-06-16 17:55:01 GMT LOG: unexpected pageaddr 33/5A000000 in log segment 000000010000003300000060, offset 0
Jun 16 17:55:02 synthesis postgres[24337]: [3-1] 2015-06-16 17:55:02 GMT LOG: started streaming WAL from primary at 33/60000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 16 19:55 00000001000000330000005F
Jun 17 04:35:02 synthesis postgres[24337]: [4-1] 2015-06-17 04:35:02 GMT FATAL: could not send data to WAL stream: server closed the connection unexpectedly
Jun 17 04:35:02 synthesis postgres[24337]: [4-2] This probably means the server terminated abnormally
Jun 17 04:35:02 synthesis postgres[24337]: [4-3] before or while processing the request.
Jun 17 04:35:02 synthesis postgres[24337]: [4-4]
Jun 17 04:35:04 synthesis postgres[32525]: [10-1] 2015-06-17 04:35:04 GMT LOG: unexpected pageaddr 33/7B000000 in log segment 000000010000003300000080, offset 0
Jun 17 04:35:05 synthesis postgres[4756]: [5-1] 2015-06-17 04:35:05 GMT LOG: started streaming WAL from primary at 33/80000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 17 06:35 00000001000000330000007F
There's a 1:1 correspondence with log and leaked files.
Christoph
--
cb@df7cb.de | http://www.df7cb.de/
Sent from my iPhone
On Jun 17, 2015, at 03:22, Christoph Berg <cb@df7cb.de> wrote:
Re: Jeff Frost 2015-06-16 <67E2F20A-6A2E-484E-BF97-544F1FC66566@pgexperts.com>
Yes, SSL, with default settings. I can confirm your
wal-file-from-before analysis:

Jun 16 07:14:59 synthesis postgres[32525]: [8-1] 2015-06-16 07:14:59 GMT LOG: unexpected pageaddr 33/39000000 in log segment 000000010000003300000040, offset 0
Jun 16 07:15:00 synthesis postgres[11514]: [3-1] 2015-06-16 07:15:00 GMT LOG: started streaming WAL from primary at 33/40000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 16 09:14 00000001000000330000003F

Jun 16 17:55:01 synthesis postgres[32525]: [9-1] 2015-06-16 17:55:01 GMT LOG: unexpected pageaddr 33/5A000000 in log segment 000000010000003300000060, offset 0
Jun 16 17:55:02 synthesis postgres[24337]: [3-1] 2015-06-16 17:55:02 GMT LOG: started streaming WAL from primary at 33/60000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 16 19:55 00000001000000330000005F
Jun 17 04:35:02 synthesis postgres[24337]: [4-1] 2015-06-17 04:35:02 GMT FATAL: could not send data to WAL stream: server closed the connection unexpectedly
Jun 17 04:35:02 synthesis postgres[24337]: [4-2] This probably means the server terminated abnormally
Jun 17 04:35:02 synthesis postgres[24337]: [4-3] before or while processing the request.
Jun 17 04:35:02 synthesis postgres[24337]: [4-4]
Jun 17 04:35:04 synthesis postgres[32525]: [10-1] 2015-06-17 04:35:04 GMT LOG: unexpected pageaddr 33/7B000000 in log segment 000000010000003300000080, offset 0
Jun 17 04:35:05 synthesis postgres[4756]: [5-1] 2015-06-17 04:35:05 GMT LOG: started streaming WAL from primary at 33/80000000 on timeline 1
-rw------- 1 postgres postgres 16777216 Jun 17 06:35 00000001000000330000007F
There's a 1:1 correspondence with log and leaked files.
We thought it was related to the ssl renegotiation limit, but reducing it didn't seem to make it happen more often.
The problem was that I couldn't seem to make a reproducible test case with pgbench and two servers, so it seems there is slightly more at play.
I believe setting the ssl renegotiation limit to 0 made it stop. Can you confirm?
Have you been able to reproduce synthetically?
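For anyone wanting to try the workaround Jeff describes above, the sketch below is the relevant postgresql.conf fragment on the sending server; a reload was enough for this parameter on these releases (it was removed entirely in later PostgreSQL versions):

```
# postgresql.conf on the primary (9.1-9.4 era)
ssl_renegotiation_limit = 0   # 0 disables SSL renegotiation entirely
```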
On Wed, 17 Jun 2015 15:24 Xavier 12 <maniatux@gmail.com> wrote:
On 17/06/2015 03:17, Sameer Kumar wrote:
On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).
Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).
However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:
- checkpoints
- archive_command
- archive_cleanup
Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
What's this parameter's value on Slave?
Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.
That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below; please see
my response).

Besides, I try to keep my master and standby config as similar as possible
(so my advice is to not switch off autovacuum). The parameters which are
ineffective on the slave won't have an effect anyway. The same goes for
such parameters on the master.
This helps me when I swap roles or do a failover. I have fewer parameters
to worry about.

Can you check pg_log for log files? They may have some info. I am sorry
if you have already provided that info (after I finish I will try to look
at your previous emails on this thread).

Also, can you share the vacuum cost parameters in your environment?
autovacuum = on
Slave postgresql.conf :
[...]
wal_level = minimal
wal_keep_segments = 32
Sorry I missed this somehow earlier. Any reason why you think you need to
retain 32 wal files on the slave?
hot_standby = on
Slave recovery.conf :
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'
Also consider setting hot_standby_feedback to on.
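For reference, a sketch of that setting; note that it makes the master keep rows that standby queries still need (reducing query-conflict cancellations), and does not by itself shrink pg_xlog. It only matters if the standby actually runs read queries, which is an assumption here:

```
# postgresql.conf on the standby (9.1+)
hot_standby_feedback = on
```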
How can I reduce the number of WAL files on the hot_standby slave?
Thanks
Regards.
Xavier C.
On 18/06/2015 04:00, Sameer Kumar wrote:
On Wed, 17 Jun 2015 15:24 Xavier 12 <maniatux@gmail.com> wrote:

On 17/06/2015 03:17, Sameer Kumar wrote:

On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:

Hi everyone,

Questions about pg_xlogs again...

I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for 7 days
only; it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what
is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

What's this parameter's value on Slave?

Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.

That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below; please
see my response).

Besides, I try to keep my master and standby config as similar as
possible (so my advice is to not switch off autovacuum). The parameters
which are ineffective on the slave won't have an effect anyway. The same
goes for such parameters on the master.
This helps me when I swap roles or do a failover. I have fewer
parameters to worry about.
Okay
Can you check pg_log for log files? They may have some info. I am
sorry if you have already provided that info (after I finish I will
try to look at your previous emails on this thread).
Nothing...
/var/log/postgresql/postgresql-2015-06-17_111131.log is empty (except
old messages at the beginning related to a configuration issue, which is
now solved, after rebuilding the cluster yesterday).
/var/log/syslog has nothing but these :
Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST
LOG: paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST
LOG: paquet de démarrage incomplet
These messages are related to Zabbix (psql port check).
Also can you share the vacuum cost parameters in your environment?
I don't understand that part... is this in postgresql.conf ?
autovacuum = on
Slave postgresql.conf :
[...]
wal_level = minimal
wal_keep_segments = 32

Sorry I missed this somehow earlier. Any reason why you think you need
to retain 32 wal files on the slave?
No but I get the feeling that the parameter is ignored by my slave...
should I try another value ?
hot_standby = on
Slave recovery.conf :
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f
"%p"'
archive_cleanup_command = '/usr/lib/postgresql/9.1/bin/pg_archivecleanup /var/lib/postgresql/9.1/wal_archive/ %r'

Also consider setting hot_standby_feedback to on.
I will check that parameter in the documentation,
Thanks
How can I reduce the number of WAL files on the hot_standby slave?

Thanks
Regards.
Xavier C.
Re: Jeff Frost 2015-06-17 <AF73F62A-B83A-41A3-9CAA-CCFFDC4DB204@pgexperts.com>
We thought it was related to the ssl renegotiation limit, but reducing it didn't seem to make it happen more often.
The problem was that I couldn't seem to make a reproducible test case with pgbench and two servers, so it seems there is slightly more at play.
I believe setting the ssl renegotiation limit to 0 made it stop. Can you confirm?
I've configured that, we'll see later today.
Have you been able to reproduce synthetically?
No. I managed to make the test setup leak one file when the slave
server was restarted, but atm it doesn't reconnect/barf every 512MB.
I'm probably still missing some parameter. (sslcompression=0 was the
first I tried...)
Christoph
--
cb@df7cb.de | http://www.df7cb.de/
On Thu, 18 Jun 2015 15:17 Xavier 12 <maniatux@gmail.com> wrote:
On 18/06/2015 04:00, Sameer Kumar wrote:
On Wed, 17 Jun 2015 15:24 Xavier 12 <maniatux@gmail.com> wrote:
On 17/06/2015 03:17, Sameer Kumar wrote:
On Tue, 16 Jun 2015 16:55 Xavier 12 <maniatux@gmail.com> wrote:
Hi everyone,
Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).
Psql01 (master) is backed up with Barman and pg_xlog is correctly
purged (archive_command is used).
However, Psql02 (slave) has a huge pg_xlog (951 files, 15 GB for only 7
days, and it keeps growing until disk space is full). I have found
documentation, tutorials and mailing list threads, but I don't know what is
suitable for a slave. Leads I've found:
- checkpoints
- archive_command
- archive_cleanup
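As a side note, the growth is easy to quantify with a small shell helper. This is a sketch: the path at the bottom is taken from the master's archive_command quoted below and is only an example.

```shell
#!/bin/sh
# Quantify pg_xlog growth: count WAL segment files (24-character names)
# and report the total directory size in MB.
wal_stats() {
    dir="$1"
    # WAL segment file names are exactly 24 characters long
    count=$(find "$dir" -maxdepth 1 -name '????????????????????????' | wc -l | tr -d ' ')
    size_kb=$(du -sk "$dir" | cut -f1)
    echo "$count files, $((size_kb / 1024)) MB"
}

# Example path (adjust to your data directory); skipped if absent.
if [ -d /var/lib/postgresql/9.1/main/pg_xlog ]; then
    wal_stats /var/lib/postgresql/9.1/main/pg_xlog
fi
```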
Master postgresql.conf:
[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
barman@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64
What is this parameter's value on the slave?
Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.
That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below; please see my
response).
Besides, I try to keep my master and standby configs as similar as possible (so
my advice is to not switch off autovacuum). The parameters that are
ineffective on the slave won't have an effect anyway. The same goes for parameters
on the master.
This helps me when I swap roles or do a failover; I have fewer parameters to
worry about.
Okay
Can you check pg_log for log files? They may have some info. I am sorry
if you have already provided that info (after I finish I will try to look
at your previous emails on this thread).
Nothing...
/var/log/postgresql/postgresql-2015-06-17_111131.log is empty (except old
messages at the beginning related to a configuration issue - which is now
solved - after rebuilding the cluster yesterday).
/var/log/syslog has nothing but these :
Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST LOG:
paquet de démarrage incomplet
These messages ("incomplete startup packet" in French) are related to Zabbix (psql port check).
Are you sure these are the only messages you have in the log files?
Also, can you share the vacuum cost parameters in your environment?
I don't understand that part... is this in postgresql.conf ?
There are vacuum cost parameters in postgresql.conf
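For reference, these are the cost-based vacuum delay settings meant here, shown with what I believe are the 9.1 defaults (check your own postgresql.conf; the values below are illustrative, not a recommendation):

```ini
# postgresql.conf - cost-based vacuum delay (9.1 defaults)
vacuum_cost_delay = 0ms               # 0 disables the delay for manual VACUUM
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
vacuum_cost_limit = 200
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1     # -1 means: use vacuum_cost_limit
```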
autovacuum = on
Slave postgresql.conf :
[...]
wal_level = minimal
wal_keep_segments = 32
Sorry I missed this somehow earlier. Any reason why you think you need to
retain 32 WAL files on the slave?
No, but I get the feeling that the parameter is ignored by my slave...
should I try another value?
AFAIK you don't need this parameter set to > 0 unless you have a cascaded
replica pulling WAL from the standby, or backup jobs that back up from
the standby. Set it to 0 on the standby and check.
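In other words, the standby's postgresql.conf could look like this (a sketch only; on a 9.1 standby the wal_level and archive settings are not what drives pg_xlog retention while it is replaying):

```ini
# Slave postgresql.conf (sketch)
hot_standby = on
wal_keep_segments = 0   # nothing pulls WAL from this standby
```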
hot_standby = on
Slave recovery.conf :
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f "%p"'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'
Also consider setting hot_standby_feedback to on.
I will check that parameter in the documentation,
Thanks
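For reference, hot_standby_feedback is a standby-side postgresql.conf setting (available since 9.1). It makes the standby report its oldest transaction to the master so vacuum does not remove rows the standby's queries still need; note it reduces query cancellations but does not by itself shrink the standby's pg_xlog:

```ini
# Slave postgresql.conf
hot_standby = on
hot_standby_feedback = on   # reduces replication-conflict query cancellations
```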
How can I reduce the number of WAL files on the hot_standby slave?
Re: To Jeff Frost 2015-06-18 <20150618105305.GA22374@msg.df7cb.de>
I believe setting the ssl renegotiation limit to 0 made it stop. Can you confirm?
I've configured that, we'll see later today.
0 makes it stop.
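For anyone hitting the same symptom, the workaround discussed here is a single postgresql.conf setting on the sending server:

```ini
# postgresql.conf - disable SSL renegotiation (workaround, not a fix)
ssl_renegotiation_limit = 0
```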
Have you been able to reproduce synthetically?
No. I managed to make the test setup leak one file when the slave
server was restarted, but atm it doesn't reconnect/barf every 512MB.
I'm probably still missing some parameter. (sslcompression=0 was the
first I tried...)
(Still no success there.)
Christoph
On Jun 19, 2015, at 7:50 AM, Christoph Berg <cb@df7cb.de> wrote:
Re: To Jeff Frost 2015-06-18 <20150618105305.GA22374@msg.df7cb.de>
I believe setting the ssl renegotiation limit to 0 made it stop. Can you confirm?
I've configured that, we'll see later today.
0 makes it stop.
Have you been able to reproduce synthetically?
No. I managed to make the test setup leak one file when the slave
server was restarted, but atm it doesn't reconnect/barf every 512MB.
I'm probably still missing some parameter. (sslcompression=0 was the
first I tried...)
(Still no success there.)
I had thought it was fixed on 9.2 by a recent update (not the last 3, but the one before), as it seemed to stop doing this, but then it started again after a few days, so there may be some large number of transactions required before the funny business begins.
I would really love to deliver a self-contained test case, but I tried unsuccessfully for a few days to reproduce it. I still see it happening on 9.4.4 and 9.2.13, but not on all servers. :-/