Logical Replication WIP

Started by Petr Jelinek · over 9 years ago · 222 messages · pgsql-hackers
#1 Petr Jelinek
petr@2ndquadrant.com

Hi,

as promised here is WIP version of logical replication patch.

This is by no means anywhere close to be committable, but it should be
enough for discussion on the approaches chosen. I do plan to give this
some more time before September CF as well as during the CF itself.

You've seen a preview of some of these ideas in the doc Simon
posted [1], though not all of them are implemented in this patch yet.

I'll start with the overview of the state of things.

What works:
- Replication of INSERT/UPDATE/DELETE operations on tables in
a publication.
- Initial copy of data in a publication.
- Automatic management of things like slots and origin tracking.
- Some psql support (\drp, \drs and additional info in \d for
tables); it's mainly missing ACLs, as those are not implemented
yet (see below), and tab completion.

What's missing:
- Sequences. I'd like to have them in 10.0 but I don't have a good
way to implement it. PGLogical uses periodic syncing with some
buffer value, but that's suboptimal. I would like to decode them
instead, but that has proven to be complicated due to their
sometimes transactional, sometimes non-transactional nature, so I
probably won't have time to do it for 10.0 by myself.
- ACLs. I still expect to have them the way they're documented in
the logical replication docs, but currently the code just assumes a
superuser/REPLICATION role. This can probably be discussed more in
the design thread [1].
- pg_dump. Same as above: I want publications and membership in
them dumped unconditionally, and potentially also dump subscription
definitions if the user asks for it with a command-line option. I
don't think subscriptions should be dumped by default, as
automatically starting replication when somebody dumps and restores
the db goes against POLA.
- DDL. I see several approaches we could take here for 10.0:
a) don't deal with DDL at all yet; b) provide a function which
pushes the DDL into the replication queue and then executes it on
the downstream (as londiste, slony and pglogical do); c) capture
the DDL query as text and allow a user-defined function to be
called with that DDL text on the subscriber (that's what Oracle
did with CDC).
- FDW support on the downstream; currently only INSERTs should
work there, but that should be easy to fix.
- Monitoring. I'd like to add some pg_stat_subscription view on the
downstream (the rest of the monitoring is very similar to physical
streaming, so that mostly needs docs).
- TRUNCATE. This is handled using triggers in BDR and pglogical,
but I am not convinced that's the right way to do it in core, as it
brings limitations (e.g. the inability to use RESTART IDENTITY).
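For reference, option b) above is what pglogical already exposes; a sketch of its existing function-based interface (the function name is pglogical's, the statement text is illustrative):

```sql
-- pglogical queues the DDL text for replication and then executes it
-- locally, so subscribers replay it at the right point in the stream.
SELECT pglogical.replicate_ddl_command(
    'ALTER TABLE public.mytable ADD COLUMN note text');
```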

The parts I am not overly happy with:
- The fact that the subscription handles slot creation/drop means
we do some automagic that might fail, leaving the user to fix
things up manually. I am not saying this is necessarily a problem,
as that's how most publish/subscribe replication systems work, but
I wonder if there is a better way of doing this that I missed.
- The initial copy patch adds some interfaces for getting the table
list and data into the DecodingContext, and I wonder if that's a
good place for them, or if we should instead create some TableSync
API that would load the plugin as well, carry these two new
interfaces, and live in the tablesync module. One reason I didn't
do that is that the interface would be almost the same, and the
plugin would then have to do separate inits for the DecodingContext
and the TableSync.
- The initial copy uses the snapshot from slot creation in the
walsender. I currently just push it as the active snapshot inside
the snapbuilder, which is probably not the right thing to do (tm),
mostly because I don't really know what the right thing is there.

About the individual patches:

0001-Add-PUBLICATION-catalogs-and-DDL.patch: This patch defines a
Publication, which is basically the same thing as a replication set.
It adds the database-local catalog pg_publication, which stores the
publications and DML filters, and the pg_publication_rel catalog for
storing the membership of relations in publications. It adds the
DDL, dependency handling and all the necessary boilerplate around
that, including some basic regression tests for the DDL.

0002-Add-SUBSCRIPTION-catalog-and-DDL.patch: Adds Subscriptions,
with a shared, nailed (!) catalog pg_subscription which stores the
individual subscriptions for each database. The reason this is
nailed is that it needs to be accessible without a connection to a
database, so that the logical replication launcher can read it and
start/stop workers as necessary. This does not include regression
tests, as I am unsure how to test this within the regression testing
framework given that it is supposed to start workers (those are
added in later patches).

0003-Define-logical-replication-protocol-and-output-plugi.patch:
Adds the logical replication protocol (API and docs) and a
"standard" output plugin for logical decoding that produces output
based on that protocol and the publication definitions.

0004-Make-libpqwalreceiver-reentrant.patch: Redesigns
libpqwalreceiver to be reusable outside of the walreceiver by
exporting the API as a struct and an opaque connection handle. It
also adds a couple of additional functions for logical replication.

0005-Add-logical-replication-workers.patch: This patch adds the
actual logical replication workers that use all of the above to
implement data change replication from publisher to subscriber. It
adds two different background workers. The first is the launcher,
which works like the autovacuum launcher in that it gets the list of
subscriptions and starts/stops the apply workers for those
subscriptions as needed. Apply workers connect to the output plugin
via the streaming protocol and handle the actual data replication. I
exported the ExecUpdate/ExecInsert/ExecDelete functions from
nodeModifyTable to handle the actual database updates, so that
things like triggers, etc. are handled automatically without special
code. This also adds a couple of TAP tests that exercise basic
replication setup and a wide variety of type support. The overview
doc for logical replication that Simon previously posted to the list
is also part of this one.

0006-Logical-replication-support-for-initial-data-copy.patch: PoC of
the initial sync. It adds another mode to the apply worker which
just applies updates for a single table, plus some handover logic
for when the table is considered synchronized and can be replicated
normally. It also adds a new catalog, pg_subscription_rel, which
keeps information about the synchronization status of individual
tables. Note that tables added to publications at a later time are
not yet synchronized; there is also no resynchronization UI yet.
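Given that catalog, the per-table state should be inspectable on the subscriber with a plain query; a sketch (the state column is referred to as substate later in this thread, with 'r' apparently meaning the table is synchronized and replicating):

```sql
-- Summarize how many tables are in each synchronization state.
SELECT substate, count(*)
FROM pg_subscription_rel
GROUP BY substate;
```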

On the upstream side it adds two new commands to the replication
protocol, one for getting the list of tables and one for streaming
existing table data. I described above why this part is suboptimal,
so I won't repeat it here.

Feedback is welcome.

[1]: /messages/by-id/CANP8+j+NMHP-yFvoG03tpb4_s7GdmnCriEEOJeKkXWmUu_=-HA@mail.gmail.com

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Add-PUBLICATION-catalogs-and-DDL.patch (+2370 −16)
0002-Add-SUBSCRIPTION-catalog-and-DDL.patch (+1478 −11)
0003-Define-logical-replication-protocol-and-output-plugi.patch (+2130 −3)
0004-Make-libpqwalreceiver-reentrant.patch (+306 −165)
0005-Add-logical-replication-workers.patch (+3338 −17)
0006-Logical-replication-support-for-initial-data-copy.patch (+2043 −130)
#2 Andres Freund
andres@anarazel.de
In reply to: Petr Jelinek (#1)
Re: Logical Replication WIP

On 2016-08-05 17:00:13 +0200, Petr Jelinek wrote:

as promised here is WIP version of logical replication patch.

Yay!

I'm about to head out for a week of, desperately needed, holidays, but
after that I plan to spend a fair amount of time helping to review
etc. this.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Simon Riggs
simon@2ndQuadrant.com
In reply to: Andres Freund (#2)
Re: Logical Replication WIP

On 5 August 2016 at 16:22, Andres Freund <andres@anarazel.de> wrote:

On 2016-08-05 17:00:13 +0200, Petr Jelinek wrote:

as promised here is WIP version of logical replication patch.

Yay!

Yay2

I'm about to head out for a week of, desperately needed, holidays, but
after that I plan to spend a fair amount of time helping to review
etc. this.

Have a good one.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#4 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Simon Riggs (#3)
Re: Logical Replication WIP

On Sat, Aug 6, 2016 at 2:04 AM, Simon Riggs <simon@2ndquadrant.com> wrote:

On 5 August 2016 at 16:22, Andres Freund <andres@anarazel.de> wrote:

On 2016-08-05 17:00:13 +0200, Petr Jelinek wrote:

as promised here is WIP version of logical replication patch.

Yay!

Yay2

Thank you for working on this!

I've applied these patches to current HEAD, but got the following error.

libpqwalreceiver.c:48: error: redefinition of typedef ‘WalReceiverConnHandle’
../../../../src/include/replication/walreceiver.h:137: note: previous
declaration of ‘WalReceiverConnHandle’ was here
make[2]: *** [libpqwalreceiver.o] Error 1
make[1]: *** [install-backend/replication/libpqwalreceiver-recurse] Error 2
make: *** [install-src-recurse] Error 2

After fixing this issue with the attached patch, I used logical
replication a little. Some random comments and questions:

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?
IMO the number of logical replication connections should not be
limited by max_worker_processes.

--
We need to set the publication up with at least a CREATE PUBLICATION
and an ALTER PUBLICATION command.
Can we make it possible to define tables in CREATE PUBLICATION as
well? For example:
CREATE PUBLICATION mypub [ TABLE table_name, ...] [WITH options]

--
This patch cannot drop a subscription.

=# drop subscription sub;
ERROR: unrecognized object class: 6102

-- 
+/*-------------------------------------------------------------------------
+ *
+ * proto.c
+ *             logical replication protocol functions
+ *
+ * Copyright (c) 2015, PostgreSQL Global Development Group
+ *

The copyright years of the added files are out of date.

And this patch has some whitespace problems.
Please run "git show --check" or "git diff origin/master --check"

Regards,

--
Masahiko Sawada

Attachments:

fix_compile_error.patch (+2 −2)
#5 Craig Ringer
craig@2ndquadrant.com
In reply to: Masahiko Sawada (#4)
Re: Logical Replication WIP

On 9 August 2016 at 15:59, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?

I don't think so. The checkpointer, walwriter, autovacuum, etc.
predate bgworkers. I strongly suspect that if they were implemented
now, they'd use bgworkers.

Now, perhaps we want a new bgworker "kind" for system workers or some other
minor tweaks. But basically I think bgworkers are exactly what we should be
using here.

IMO the number of logical replication connections should not be
limited by max_worker_processes.

Well, they *are* worker processes... but I take your point that the
setting has been "the number of bgworkers the user can run", and it
might not be expected that logical replication would use the same
space.

max_worker_processes isn't just a limit; it controls how many shmem
slots we allocate.

I guess we could have a separate max_logical_workers or something, but I'm
inclined to think that adds complexity without really making things any
nicer. We'd just add them together to decide how many shmem slots to
allocate and we'd have to keep track of how many slots were used by which
types of backend. Or create a near-duplicate of the bgworker facility for
logical rep.

Sure, you can go deeper down the rabbit hole here and say that we need to
add bgworker "categories" with reserved pools of worker slots for each
category. But do we really need that?

max_connections includes everything, both system and user backends. It's
not like we don't do this elsewhere. It's at worst a mild wart.

The only argument I can see for not using bgworkers is for the supervisor
worker. It's a singleton that launches the per-database workers, and
arguably is a job that the postmaster could do better. The current design
there stems from its origins as an extension. Maybe worker management could
be simplified a bit as a result. I'd really rather not invent yet another
new and mostly duplicate category of custom workers to achieve that though.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#6 Michael Paquier
michael@paquier.xyz
In reply to: Craig Ringer (#5)
Re: Logical Replication WIP

On Tue, Aug 9, 2016 at 5:13 PM, Craig Ringer <craig@2ndquadrant.com> wrote:

On 9 August 2016 at 15:59, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?

I don't think so. The checkpointer, walwriter, autovacuum, etc predate
bgworkers. I strongly suspect that if they were to be implemented now they'd
use bgworkers.

+1. We could always bring them under the umbrella of the bgworker
infrastructure now if this cleans up some code duplication.
-- 
Michael

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#7 Petr Jelinek
petr@2ndquadrant.com
In reply to: Masahiko Sawada (#4)
Re: Logical Replication WIP

On 09/08/16 09:59, Masahiko Sawada wrote:

On 2016-08-05 17:00:13 +0200, Petr Jelinek wrote:

as promised here is WIP version of logical replication patch.

Thank you for working on this!

Thanks for looking!

I've applied these patches to current HEAD, but got the following error.

libpqwalreceiver.c:48: error: redefinition of typedef ‘WalReceiverConnHandle’
../../../../src/include/replication/walreceiver.h:137: note: previous
declaration of ‘WalReceiverConnHandle’ was here
make[2]: *** [libpqwalreceiver.o] Error 1
make[1]: *** [install-backend/replication/libpqwalreceiver-recurse] Error 2
make: *** [install-src-recurse] Error 2

After fixed this issue with attached patch, I used logical replication a little.
Some random comments and questions.

Interesting, my compiler doesn't have a problem with this. Will investigate.

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?
IMO the number of logical replication connections should not be
limited by max_worker_processes.

What Craig said reflects my rationale for doing this pretty well.

We need to set the publication up by at least CREATE PUBLICATION and
ALTER PUBLICATION command.
Can we make CREATE PUBLICATION possible to define tables as well?
For example,
CREATE PUBLICATION mypub [ TABLE table_name, ...] [WITH options]

Agreed, that just didn't make it into the first cut to -hackers.
We've also been thinking of having a special ALL TABLES parameter
there that would encompass the whole db.
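That is, hypothetically something along these lines (neither form exists in the posted patches, and the exact spelling is undecided):

```sql
-- Define member tables at creation time, as suggested:
CREATE PUBLICATION mypub TABLE t1, t2;

-- Or a special parameter covering every table in the database:
CREATE PUBLICATION mypub ALL TABLES;
```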

--
This patch cannot drop a subscription.

=# drop subscription sub;
ERROR: unrecognized object class: 6102

Yeah, that's because of patch 0006; I didn't finish all the
dependency tracking for the pg_subscription_rel catalog that it adds
(which is why I called it PoC). I expect to have this working in the
next version (there is still quite a bit of polish work needed in
general).

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#8 Petr Jelinek
petr@2ndquadrant.com
In reply to: Craig Ringer (#5)
Re: Logical Replication WIP

On 09/08/16 10:13, Craig Ringer wrote:

On 9 August 2016 at 15:59, Masahiko Sawada <sawada.mshk@gmail.com
<mailto:sawada.mshk@gmail.com>> wrote:

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?

I don't think so. The checkpointer, walwriter, autovacuum, etc predate
bgworkers. I strongly suspect that if they were to be implemented now
they'd use bgworkers.

Now, perhaps we want a new bgworker "kind" for system workers or some
other minor tweaks. But basically I think bgworkers are exactly what we
should be using here.

Agreed.

IMO the number of logical replication connections should not be
limited by max_worker_processes.

Well, they *are* worker processes... but I take your point that the
setting has been "the number of bgworkers the user can run", and it
might not be expected that logical replication would use the same
space.

Again, agreed. I think we should ultimately go towards what PeterE
suggested in
/messages/by-id/a2fffd92-6e59-a4eb-dd85-c5865ebca1a0@2ndquadrant.com

The only argument I can see for not using bgworkers is for the
supervisor worker. It's a singleton that launches the per-database
workers, and arguably is a job that the postmaster could do better. The
current design there stems from its origins as an extension. Maybe
worker management could be simplified a bit as a result. I'd really
rather not invent yet another new and mostly duplicate category of
custom workers to achieve that though.

It is simplified compared to pglogical (there are only 2 worker
types, not 3). I don't think it's the job of the postmaster to scan
catalogs, however, so it can't really start workers for logical
replication. I actually modeled it more after autovacuum (using
bgworkers though) than after the original extension.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#9 Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Craig Ringer (#5)
Re: Logical Replication WIP

On Tue, Aug 9, 2016 at 5:13 PM, Craig Ringer <craig@2ndquadrant.com> wrote:

On 9 August 2016 at 15:59, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

The logical replication launcher process and the apply process are
implemented as bgworkers. Isn't it better to have them as auxiliary
processes like the checkpointer and wal writer?

I don't think so. The checkpointer, walwriter, autovacuum, etc predate
bgworkers. I strongly suspect that if they were to be implemented now they'd
use bgworkers.

Now, perhaps we want a new bgworker "kind" for system workers or some other
minor tweaks. But basically I think bgworkers are exactly what we should be
using here.

I understood. Thanks!

IMO the number of logical replication connections should not be
limited by max_worker_processes.

Well, they *are* worker processes... but I take your point that the
setting has been "the number of bgworkers the user can run", and it
might not be expected that logical replication would use the same
space.

max_worker_processes isn't just a limit; it controls how many shmem
slots we allocate.

I guess we could have a separate max_logical_workers or something, but I'm
inclined to think that adds complexity without really making things any
nicer. We'd just add them together to decide how many shmem slots to
allocate and we'd have to keep track of how many slots were used by which
types of backend. Or create a near-duplicate of the bgworker facility for
logical rep.

Sure, you can go deeper down the rabbit hole here and say that we need to
add bgworker "categories" with reserved pools of worker slots for each
category. But do we really need that?

If we change these processes to bgworkers, we can categorize them
into two groups, auxiliary processes (checkpointer, walsender, etc.)
and other worker processes, and have max_worker_processes control
the latter.

max_connections includes everything, both system and user backends. It's not
like we don't do this elsewhere. It's at worst a mild wart.

The only argument I can see for not using bgworkers is for the supervisor
worker. It's a singleton that launches the per-database workers, and
arguably is a job that the postmaster could do better. The current design
there stems from its origins as an extension. Maybe worker management could
be simplified a bit as a result. I'd really rather not invent yet another
new and mostly duplicate category of custom workers to achieve that though.

Regards,

--
Masahiko Sawada


#10 Craig Ringer
craig@2ndquadrant.com
In reply to: Masahiko Sawada (#9)
Re: Logical Replication WIP

On 9 August 2016 at 17:28, Masahiko Sawada <sawada.mshk@gmail.com> wrote:

Sure, you can go deeper down the rabbit hole here and say that we need to
add bgworker "categories" with reserved pools of worker slots for each
category. But do we really need that?

If we change these processes to bgworkers, we can categorize them
into two groups, auxiliary processes (checkpointer, walsender, etc.)
and other worker processes, and have max_worker_processes control
the latter.

Right. I think that's probably the direction we should be going
eventually. Personally I don't think such a change should block the
logical replication work from proceeding with bgworkers, though.
It's been delayed a long time, a lot of people want it, and I think
we need to focus on meeting the core requirements, not on getting
sidetracked by minor points.

Of course, everyone's idea of what's core and what's a minor sidetrack
differs ;)

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#11 Petr Jelinek
petr@2ndquadrant.com
In reply to: Craig Ringer (#10)
Re: Logical Replication WIP

On 09/08/16 12:16, Craig Ringer wrote:

On 9 August 2016 at 17:28, Masahiko Sawada <sawada.mshk@gmail.com
<mailto:sawada.mshk@gmail.com>> wrote:

Sure, you can go deeper down the rabbit hole here and say that we need to
add bgworker "categories" with reserved pools of worker slots for each
category. But do we really need that?

If we change these processes to bgworkers, we can categorize them
into two groups, auxiliary processes (checkpointer, walsender, etc.)
and other worker processes, and have max_worker_processes control
the latter.

Right. I think that's probably the direction we should be going
eventually. Personally I don't think such a change should block the
logical replication work from proceeding with bgworkers, though. It's
been delayed a long time, a lot of people want it, and I think we need
to focus on meeting the core requirements not getting too sidetracked on
minor points.

Of course, everyone's idea of what's core and what's a minor sidetrack
differs ;)

Yeah, that's why I added a local max GUC that just handles the
logical worker limit within max_worker_processes. I didn't want to
also write a generic framework for managing the worker maximums
using tags or something as part of this; it's big enough as it is,
and we can always move the limit to a more generic place once we
have it.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#12 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Petr Jelinek (#11)
Re: Logical Replication WIP

Petr Jelinek wrote:

On 09/08/16 12:16, Craig Ringer wrote:

Right. I think that's probably the direction we should be going
eventually. Personally I don't think such a change should block the
logical replication work from proceeding with bgworkers, though.

Yeah, that's why I added a local max GUC that just handles the
logical worker limit within max_worker_processes. I didn't want to
also write a generic framework for managing the worker maximums
using tags or something as part of this; it's big enough as it is,
and we can always move the limit to a more generic place once we
have it.

Parallel query does exactly that: the workers are allocated from the
bgworkers array, and if you want more, it's on you to increase that
limit (it doesn't even have a GUC for a maximum). As far as logical
replication and parallel query are concerned, that's fine. We can
improve this later if it proves to be a problem.

I think there are far more pressing matters to review.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#13 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Petr Jelinek (#8)
Re: Logical Replication WIP

Petr Jelinek wrote:

On 09/08/16 10:13, Craig Ringer wrote:

The only argument I can see for not using bgworkers is for the
supervisor worker. It's a singleton that launches the per-database
workers, and arguably is a job that the postmaster could do better. The
current design there stems from its origins as an extension. Maybe
worker management could be simplified a bit as a result. I'd really
rather not invent yet another new and mostly duplicate category of
custom workers to achieve that though.

It is simplified compared to pglogical (there are only 2 worker
types, not 3). I don't think it's the job of the postmaster to scan
catalogs, however, so it can't really start workers for logical
replication. I actually modeled it more after autovacuum (using
bgworkers though) than after the original extension.

Yeah, it's a very bad idea to put the postmaster on this task. We
should definitely stay away from that.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#14 Stas Kelvich
s.kelvich@postgrespro.ru
In reply to: Alvaro Herrera (#13)
Re: Logical Replication WIP

On 05 Aug 2016, at 18:00, Petr Jelinek <petr@2ndquadrant.com> wrote:

Hi,

as promised here is WIP version of logical replication patch.

Great!

The proposed DDL for publications/subscriptions looks very nice to me.

Some notes and thoughts about patch:

* Clang grumbles about the following pieces of code:

apply.c:1316:6: warning: variable 'origin_startpos' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized]

tablesync.c:436:45: warning: if statement has empty body [-Wempty-body]
if (wait_for_sync_status_change(tstate));

* max_logical_replication_workers is mentioned everywhere in the
docs, but guc.c defines a variable called
max_logical_replication_processes for postgresql.conf.

* Since pg_subscription is already shared across the cluster, it
could be handy to share pg_publication too and allow publication of
tables from different databases. That is a rare scenario but quite
important for the virtual hosting use case: tons of small databases
in a single postgres cluster.

* There is no way to see the tables/schemas attached to a
publication through \drp.

* As far as I understand there is no way to add a table/tablespace
directly in CREATE PUBLICATION, and one needs to explicitly run
ALTER PUBLICATION right after creation. Maybe add something like
WITH TABLE/TABLESPACE to CREATE?

* So the binary protocol goes into core. Is it still possible to
use it as a decoding plugin for a manually created walsender? Maybe
also include json output as in pglogical? While I'm not arguing
that it should be done, I'm interested in your opinion on that.

* Also I’ve noted that you got rid of the reserved byte (flags) in
the protocol compared to pglogical_native. It was very handy to use
it for two-phase tx decoding (0 = usual commit, 1 = prepare, 2 =
commit prepared), because both prepare and commit prepared generate
a commit record in the xlog.

On 05 Aug 2016, at 18:00, Petr Jelinek <petr@2ndquadrant.com> wrote:

- DDL, I see several approaches we could do here for 10.0. a) don't
deal with DDL at all yet, b) provide function which pushes the DDL
into replication queue and then executes on downstream (like
londiste, slony, pglogical do), c) capture the DDL query as text
and allow user defined function to be called with that DDL text on
the subscriber

* Since the DDL here is mostly ALTER / CREATE / DROP TABLE (or am I
wrong?), maybe we can add something like WITH SUBSCRIBERS to those
statements?

* Talking about the exact mechanism of DDL replication, I like your
variant b), but since we have transactional DDL we can do two-phase
commit here. That would require two-phase decoding and some logic
for catching prepare responses through logical messages. If that
approach sounds interesting I can describe the proposal in more
detail and create a patch.

* Also I wasn’t actually able to run replication itself =) While
the regression tests pass, the TAP tests and manual runs get stuck:
pg_subscription_rel.substate never becomes ‘r’. I’ll investigate
that more and write again.

* As far as I understand, sync starts automatically on enabling the
publication. Maybe split that logic into a separate command with
some options? Like don’t sync at all, for example.

* When I’m trying to create a subscription to a non-existent
publication, CREATE SUBSCRIPTION creates a replication slot and
does not destroy it:

# create subscription sub connection 'host=127.0.0.1 dbname=postgres' publication mypub;
NOTICE: created replication slot "sub" on provider
ERROR: could not receive list of replicated tables from the provider: ERROR: cache lookup failed for publication 0
CONTEXT: slot "sub", output plugin "pgoutput", in the list_tables callback

after that:

postgres=# drop subscription sub;
ERROR: subscription "sub" does not exist
postgres=# create subscription sub connection 'host=127.0.0.1 dbname=postgres' publication pub;
ERROR: could not crate replication slot "sub": ERROR: replication slot "sub" already exists
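Until CREATE SUBSCRIPTION cleans up after itself on failure, the leftover slot can be dropped manually on the provider with the standard function:

```sql
-- Remove the orphaned slot so the subscription can be recreated.
SELECT pg_drop_replication_slot('sub');
```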

* Also can’t drop subscription:

postgres=# \drs
List of subscriptions
Name | Database | Enabled | Publication | Conninfo
------+----------+---------+-------------+--------------------------------
sub | postgres | t | {mypub} | host=127.0.0.1 dbname=postgres
(1 row)

postgres=# drop subscription sub;
ERROR: unrecognized object class: 6102

* Several times I’ve run into a situation where the provider’s
postmaster ignores Ctrl-C until the subscriber node is switched off.

* A patch with small typos fixed is attached.

I’ll do more testing, just wanted to share what I have so far.

Attachments:

typos.diff
#15Petr Jelinek
petr@2ndquadrant.com
In reply to: Stas Kelvich (#14)
Re: Logical Replication WIP

Hi,

On 11/08/16 13:34, Stas Kelvich wrote:

* max_logical_replication_workers mentioned everywhere in docs, but guc.c defines
variable called max_logical_replication_processes for postgresql.conf

Ah, changed it in the code but not in the docs, will fix.

* Since pg_subscription is already shared across the cluster, it could also be handy to
share pg_publication too and allow publication of tables from different databases. That
is a rare scenario but quite important for the virtual hosting use case: tons of small databases
in a single postgres cluster.

You can't decode changes from multiple databases in one slot so I don't
see the usefulness there. The pg_subscription is currently shared
because it's a technical necessity (as in, I don't see any other way to
solve the need to access the catalog from the launcher), not because I
think it's a great design :)

* There is no way to see the attached tables/schemas of a publication through \drp

That's mostly intentional, as publications for a table are visible in \d,
but I am not against adding it to \drp.

* As far as I understand there is no way to add a table/tablespace right in CREATE
PUBLICATION, and one needs to explicitly do ALTER PUBLICATION right after creation.
Maybe add something like WITH TABLE/TABLESPACE to CREATE?

Yes, as I said to Masahiko Sawada, it's just not there yet but I plan to
have that.

* So the binary protocol goes into core. Is it still possible to use it as a decoding plugin for a
manually created walsender? Maybe also include JSON as it was in pglogical? While
I'm not arguing that it should be done, I'm interested in your opinion on that.

Well, the plugin is a bit more integrated into the publication infra, so if
somebody wanted to use it directly they'd have to use that part as
well. OTOH the protocol itself is provided as an API so it's reusable by
other plugins if needed.

JSON plugin is something that would be nice to have in core as well, but
I don't think it's part of this patch.

* Also I've noted that you got rid of the reserved byte (flags) in the protocol compared to
pglogical_native. It was very handy to use it for two-phase tx decoding (0 = usual
commit, 1 = prepare, 2 = commit prepared), because both prepare and commit
prepared generate a commit record in the xlog.

Hmm, maybe the commit message could get it back. PGLogical has them sprinkled
all around the protocol, which I don't really like, so I want to limit
them to the places where they are actually useful.
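For illustration, a commit message carrying such a flags byte could be laid out as below. This is a hedged sketch in Python, not the actual pgoutput wire format: the field order, field sizes, and flag values are assumptions made up for the example.

```python
import struct

# Hypothetical flag values, mirroring the pglogical_native convention
# quoted above (0 = usual commit, 1 = prepare, 2 = commit prepared).
COMMIT_FLAG_NONE = 0x00
COMMIT_FLAG_PREPARE = 0x01
COMMIT_FLAG_COMMIT_PREPARED = 0x02

def pack_commit(flags, commit_lsn, end_lsn, commit_time):
    # 'C' tag byte, 1-byte flags, two 64-bit LSNs, 64-bit timestamp,
    # all network byte order; layout is illustrative only.
    return struct.pack('!cBQQq', b'C', flags, commit_lsn, end_lsn, commit_time)

def unpack_commit(buf):
    tag, flags, commit_lsn, end_lsn, commit_time = struct.unpack('!cBQQq', buf)
    assert tag == b'C'
    return flags, commit_lsn, end_lsn, commit_time

msg = pack_commit(COMMIT_FLAG_PREPARE, 0x1530F30, 0x1530F68, 0)
flags, commit_lsn, end_lsn, _ = unpack_commit(msg)
```

The point of keeping the byte even while it is always zero is that old readers can reject unknown flag values cleanly instead of misparsing the message.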

On 05 Aug 2016, at 18:00, Petr Jelinek <petr@2ndquadrant.com> wrote:

- DDL, I see several approaches we could do here for 10.0. a) don't
deal with DDL at all yet, b) provide function which pushes the DDL
into replication queue and then executes on downstream (like
londiste, slony, pglogical do), c) capture the DDL query as text
and allow user defined function to be called with that DDL text on
the subscriber

* Since the DDL here is mostly ALTER / CREATE / DROP TABLE (or am I wrong?), maybe
we can add something like WITH SUBSCRIBERS to the statements?

Not sure I follow. How does that help?

* Talking about the exact mechanism of DDL replication, I like your variant b), but since we
have transactional DDL we can do two-phase commit here. That will require two-phase
decoding and some logic for catching prepare responses through logical messages. If that
approach sounds interesting I can describe the proposal in more detail and create a patch.

I'd think that such an approach is somewhat more interesting with c),
honestly. The difference between b) and c) is mostly about explicit vs
implicit. I definitely would like to see the 2PC patch updated to work
with this. But maybe it's wise to wait a while until the core of the
patch stabilizes during the discussion.

* Also I wasn't actually able to run replication itself =) While the regression tests pass, the TAP
tests and a manual run get stuck: pg_subscription_rel.substate never becomes 'r'. I'll investigate
that more and write again.

Interesting, please keep me posted. It's possible for tables to stay in
's' state for some time if there is nothing happening on the server, but
that should not mean anything is stuck.

* As far as I understand, sync starts automatically on enabling the publication. Maybe split that
logic into a separate command with some options? Like don't sync at all, for example.

I think SYNC should be an option of subscription creation, just like
INITIALLY ENABLED/DISABLED is. And then there should be an interface to
resync a table manually (like pglogical has). Not yet sure how that
interface should look in terms of DDL though.
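To make the idea concrete, the interface could look something like the following. This syntax is purely hypothetical (nothing here is implemented); it only illustrates the shape of a SYNC option at creation time plus a manual per-table resync command:

```sql
-- Hypothetical: create the subscription without the initial data copy
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=127.0.0.1 dbname=postgres'
    PUBLICATION mypub
    WITH (INITIALLY ENABLED, NOSYNC);

-- Hypothetical: request a fresh resynchronization of one table later
ALTER SUBSCRIPTION mysub RESYNC TABLE mytab;
```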

* When I try to create a subscription to a non-existent publication, CREATE SUBSCRIPTION
creates a replication slot and does not destroy it:

# create subscription sub connection 'host=127.0.0.1 dbname=postgres' publication mypub;
NOTICE: created replication slot "sub" on provider
ERROR: could not receive list of replicated tables from the provider: ERROR: cache lookup failed for publication 0
CONTEXT: slot "sub", output plugin "pgoutput", in the list_tables callback

after that:

postgres=# drop subscription sub;
ERROR: subscription "sub" does not exist
postgres=# create subscription sub connection 'host=127.0.0.1 dbname=postgres' publication pub;
ERROR: could not crate replication slot "sub": ERROR: replication slot "sub" already exists

See the TODO in CreateSubscription function :)

* Also can�t drop subscription:

postgres=# \drs
List of subscriptions
Name | Database | Enabled | Publication | Conninfo
------+----------+---------+-------------+--------------------------------
sub | postgres | t | {mypub} | host=127.0.0.1 dbname=postgres
(1 row)

postgres=# drop subscription sub;
ERROR: unrecognized object class: 6102

Yes that has been already reported.

* Several times I've run into a situation where the provider's postmaster ignores Ctrl-C until the
subscriber node is switched off.

Hmm, I guess there is a bug in the signal processing code somewhere.

* A patch with small typo fixes is attached.

I'll do more testing, just wanted to share what I have so far.

Thanks for both.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#16Steve Singer
steve@ssinger.info
In reply to: Petr Jelinek (#1)
Re: Logical Replication WIP

On 08/05/2016 11:00 AM, Petr Jelinek wrote:

Hi,

as promised here is WIP version of logical replication patch.

Thanks for keeping on this. This is important work

Feedback is welcome.

+<sect1 id="logical-replication-publication">
+  <title>Publication</title>
+  <para>
+    A Publication object can be defined on any master node, owned by one
+    user. A Publication is a set of changes generated from a group of
+    tables, and might also be described as a Change Set or Replication Set.
+    Each Publication exists in only one database.

'A publication object can be defined on *any master node*'. I found
this confusing the first time I read it because I thought it was
circular (what makes a node a 'master' node? Having a publication object
published from it?). On reflection I realized that you mean ' any
*physical replication master*'. I think this might be better worded as
'A publication object can be defined on any node other than a standby
node'. I think referring to 'master' in the context of logical
replication might confuse people.

I am raising this in the context of the larger terminology that we want
to use and potential confusion with the terminology we use for physical
replication. I like the publication / subscription terminology you've
gone with.

  <para>
+    Publications are different from table schema and do not affect
+    how the table is accessed. Each table can be added to multiple
+    Publications if needed.  Publications may include both tables
+    and materialized views. Objects must be added explicitly, except
+    when a Publication is created for "ALL TABLES". There is no
+    default name for a Publication which specifies all tables.
+  </para>
+  <para>
+    The Publication is different from table schema, it does not affect
+    how the table is accessed and each table can be added to multiple

Those 2 paragraphs seem to start the same way. I get the feeling that
there is some point you're trying to express that I'm not catching onto.
Of course a publication is different from a table's schema, or different
from a function.

The definition of publication you have on the CREATE PUBLICATION page
seems better and should be repeated here (A publication is essentially a
group of tables intended for managing logical replication. See Section
30.1 <cid:part1.06040100.08080900@ssinger.info> for details about how
publications fit into logical replication setup. )

+  <para>
+    Conflicts happen when the replicated changes is breaking any
+    specified constraints (with the exception of foreign keys which are
+    not checked). Currently conflicts are not resolved automatically and
+    cause replication to be stopped with an error until the conflict is
+    manually resolved.

What options are there for manually resolving conflicts? Is the only
option to change the data on the subscriber to avoid the conflict?
I assume there isn't a way to flag a particular row coming from the
publisher and say ignore it. I don't think this is something we need to
support for the first version.

<sect1 id="logical-replication-architecture">
+  <title>Architecture</title>
+  <para>
+    Logical replication starts by copying a snapshot of the data on
+    the Provider database. Once that is done, the changes on Provider

I notice the use of 'Provider' above; do you intend to update that to
'Publisher', or does provider mean something different? If we like the
'publication' terminology then I think 'publishers' should publish them,
not providers.

I'm trying to test a basic subscription. I did the following:

cluster 1:
create database test1;
create table a(id serial8 primary key,b text);
create publication testpub1;
alter publication testpub1 add table a;
insert into a(b) values ('1');

cluster2
create database test1;
create table a(id serial8 primary key,b text);
create subscription testsub2 publication testpub1 connection
'host=localhost port=5440 dbname=test1';
NOTICE: created replication slot "testsub2" on provider
NOTICE: synchronized table states
CREATE SUBSCRIPTION

This resulted in
LOG: logical decoding found consistent point at 0/15625E0
DETAIL: There are no running transactions.
LOG: exported logical decoding snapshot: "00000494-1" with 0
transaction IDs
LOG: logical replication apply for subscription testsub2 started
LOG: starting logical decoding for slot "testsub2"
DETAIL: streaming transactions committing after 0/1562618, reading WAL
from 0/15625E0
LOG: logical decoding found consistent point at 0/15625E0
DETAIL: There are no running transactions.
LOG: logical replication sync for subscription testsub2, table a started
LOG: logical decoding found consistent point at 0/1562640
DETAIL: There are no running transactions.
LOG: exported logical decoding snapshot: "00000495-1" with 0
transaction IDs
LOG: logical replication synchronization worker finished processing

The initial sync completed okay, then I did

insert into a(b) values ('2');

but the second insert never replicated.

I had the following output

LOG: terminating walsender process due to replication timeout

On cluster 1 I do

select * FROM pg_stat_replication;
pid | usesysid | usename | application_name | client_addr |
client_hostname | client_port | backend_start |
backend_xmin | state | sent_location | write_location | flush_location |
replay_location | sync_priority | sy
nc_state
-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+-
-------------+-------+---------------+----------------+----------------+-----------------+---------------+---
---------
(0 rows)

If I then kill the cluster2 postmaster, I have to do a -9 or it won't die

I get

LOG: worker process: logical replication worker 16396 sync 16387 (PID
3677) exited with exit code 1
WARNING: could not launch logical replication worker
LOG: logical replication sync for subscription testsub2, table a started
ERROR: replication slot "testsub2_sync_a" does not exist
ERROR: could not start WAL streaming: ERROR: replication slot
"testsub2_sync_a" does not exist

I'm not really sure what I need to do to debug this; I suspect the
worker on cluster2 is having some issue.

[1]
/messages/by-id/CANP8+j+NMHP-yFvoG03tpb4_s7GdmnCriEEOJeKkXWmUu_=-HA@mail.gmail.com


#17Petr Jelinek
petr@2ndquadrant.com
In reply to: Steve Singer (#16)
Re: Logical Replication WIP

On 13/08/16 17:34, Steve Singer wrote:

On 08/05/2016 11:00 AM, Petr Jelinek wrote:

Hi,

as promised here is WIP version of logical replication patch.

Thanks for keeping on this. This is important work

Feedback is welcome.

+<sect1 id="logical-replication-publication">
+  <title>Publication</title>
+  <para>
+    A Publication object can be defined on any master node, owned by one
+    user. A Publication is a set of changes generated from a group of
+    tables, and might also be described as a Change Set or Replication
Set.
+    Each Publication exists in only one database.

'A publication object can be defined on *any master node*'. I found
this confusing the first time I read it because I thought it was
circular (what makes a node a 'master' node? Having a publication object
published from it?). On reflection I realized that you mean ' any
*physical replication master*'. I think this might be better worded as
'A publication object can be defined on any node other than a standby
node'. I think referring to 'master' in the context of logical
replication might confuse people.

Makes sense to me.

I am raising this in the context of the larger terminology that we want
to use and potential confusion with the terminology we use for physical
replication. I like the publication / subscription terminology you've
gone with.

<para>
+    Publications are different from table schema and do not affect
+    how the table is accessed. Each table can be added to multiple
+    Publications if needed.  Publications may include both tables
+    and materialized views. Objects must be added explicitly, except
+    when a Publication is created for "ALL TABLES". There is no
+    default name for a Publication which specifies all tables.
+  </para>
+  <para>
+    The Publication is different from table schema, it does not affect
+    how the table is accessed and each table can be added to multiple

Those 2 paragraphs seem to start the same way. I get the feeling that
there is some point you're trying to express that I'm not catching onto.
Of course a publication is different from a table's schema, or different
from a function.

Ah, that's a relic of some editorialization, will fix. The reason why we
think it's important to mention the difference between publication and
schema is that they are the only objects that contain tables, but they
affect them in very different ways, which might confuse users.

The definition of publication you have on the CREATE PUBLICATION page
seems better and should be repeated here (A publication is essentially a
group of tables intended for managing logical replication. See Section
30.1 <cid:part1.06040100.08080900@ssinger.info> for details about how
publications fit into logical replication setup. )

+  <para>
+    Conflicts happen when the replicated changes is breaking any
+    specified constraints (with the exception of foreign keys which are
+    not checked). Currently conflicts are not resolved automatically and
+    cause replication to be stopped with an error until the conflict is
+    manually resolved.

What options are there for manually resolving conflicts? Is the only
option to change the data on the subscriber to avoid the conflict?
I assume there isn't a way to flag a particular row coming from the
publisher and say ignore it. I don't think this is something we need to
support for the first version.

Yes, you have to update the data on the subscriber or skip the replication of
the whole transaction (for which the UI is not very friendly currently, as
you either have to consume the transaction using
pg_logical_slot_get_binary_changes or move the origin on the subscriber
using pg_replication_origin_advance).
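For example, assuming the subscription's origin is named after the slot ("sub" here; both the name and the LSN are placeholders), skipping a whole remote transaction on the subscriber could look like:

```sql
-- Inspect replication origins and how far each has replayed
SELECT * FROM pg_replication_origin_status;

-- Move the origin past the offending transaction's commit LSN,
-- so apply resumes after it (data from that transaction is lost)
SELECT pg_replication_origin_advance('sub', '0/1530F68');
```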

It's relatively easy to add some automatic conflict resolution as well,
but it didn't seem absolutely necessary so I didn't do it for the
initial version.

<sect1 id="logical-replication-architecture">
+  <title>Architecture</title>
+  <para>
+    Logical replication starts by copying a snapshot of the data on
+    the Provider database. Once that is done, the changes on Provider

I notice the use of 'Provider' above; do you intend to update that to
'Publisher', or does provider mean something different? If we like the
'publication' terminology then I think 'publishers' should publish them,
not providers.

Okay, I am just used to 'provider' in general (I guess a londiste habit),
but 'publisher' is fine as well.

I'm trying to test a basic subscription. I did the following:

cluster 1:
create database test1;
create table a(id serial8 primary key,b text);
create publication testpub1;
alter publication testpub1 add table a;
insert into a(b) values ('1');

cluster2
create database test1;
create table a(id serial8 primary key,b text);
create subscription testsub2 publication testpub1 connection
'host=localhost port=5440 dbname=test1';
NOTICE: created replication slot "testsub2" on provider
NOTICE: synchronized table states
CREATE SUBSCRIPTION

[...]

The initial sync completed okay, then I did

insert into a(b) values ('2');

but the second insert never replicated.

I had the following output

LOG: terminating walsender process due to replication timeout

On cluster 1 I do

select * FROM pg_stat_replication;
pid | usesysid | usename | application_name | client_addr |
client_hostname | client_port | backend_start |
backend_xmin | state | sent_location | write_location | flush_location |
replay_location | sync_priority | sy
nc_state
-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+-

-------------+-------+---------------+----------------+----------------+-----------------+---------------+---

---------
(0 rows)

If I then kill the cluster2 postmaster, I have to do a -9 or it won't die

That might explain why it didn't replicate. The wait loops in the apply
worker clearly need some work. Thanks for the report.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#18Stas Kelvich
s.kelvich@postgrespro.ru
In reply to: Petr Jelinek (#15)
Re: Logical Replication WIP

On 11 Aug 2016, at 17:43, Petr Jelinek <petr@2ndquadrant.com> wrote:

* Also I wasn't actually able to run replication itself =) While the regression tests pass, the TAP
tests and a manual run get stuck: pg_subscription_rel.substate never becomes 'r'. I'll investigate
that more and write again.

Interesting, please keep me posted. It's possible for tables to stay in 's' state for some time if there is nothing happening on the server, but that should not mean anything is stuck.

I played around slightly; it seems that the apply worker waits forever for a substate change.

(lldb) bt
* thread #1: tid = 0x183e00, 0x00007fff88c7f2a2 libsystem_kernel.dylib`poll + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
frame #0: 0x00007fff88c7f2a2 libsystem_kernel.dylib`poll + 10
frame #1: 0x00000001017ca8a3 postgres`WaitEventSetWaitBlock(set=0x00007fd2dc816b30, cur_timeout=10000, occurred_events=0x00007fff5e7f67d8, nevents=1) + 51 at latch.c:1108
frame #2: 0x00000001017ca438 postgres`WaitEventSetWait(set=0x00007fd2dc816b30, timeout=10000, occurred_events=0x00007fff5e7f67d8, nevents=1) + 248 at latch.c:941
frame #3: 0x00000001017c9fde postgres`WaitLatchOrSocket(latch=0x000000010ab208a4, wakeEvents=25, sock=-1, timeout=10000) + 254 at latch.c:347
frame #4: 0x00000001017c9eda postgres`WaitLatch(latch=0x000000010ab208a4, wakeEvents=25, timeout=10000) + 42 at latch.c:302
* frame #5: 0x0000000101793352 postgres`wait_for_sync_status_change(tstate=0x0000000101e409b0) + 178 at tablesync.c:228
frame #6: 0x0000000101792bbe postgres`process_syncing_tables_apply(slotname="subbi", end_lsn=140734778796592) + 430 at tablesync.c:436
frame #7: 0x00000001017928c1 postgres`process_syncing_tables(slotname="subbi", end_lsn=140734778796592) + 81 at tablesync.c:518
frame #8: 0x000000010177b620 postgres`LogicalRepApplyLoop(last_received=140734778796592) + 704 at apply.c:1122
frame #9: 0x000000010177bef4 postgres`ApplyWorkerMain(main_arg=0) + 1044 at apply.c:1353
frame #10: 0x000000010174cb5a postgres`StartBackgroundWorker + 826 at bgworker.c:729
frame #11: 0x0000000101762227 postgres`do_start_bgworker(rw=0x00007fd2db700000) + 343 at postmaster.c:5553
frame #12: 0x000000010175d42b postgres`maybe_start_bgworker + 427 at postmaster.c:5761
frame #13: 0x000000010175bccf postgres`sigusr1_handler(postgres_signal_arg=30) + 383 at postmaster.c:4979
frame #14: 0x00007fff9ab2352a libsystem_platform.dylib`_sigtramp + 26
frame #15: 0x00007fff88c7e07b libsystem_kernel.dylib`__select + 11
frame #16: 0x000000010175d5ac postgres`ServerLoop + 252 at postmaster.c:1665
frame #17: 0x000000010175b2e0 postgres`PostmasterMain(argc=3, argv=0x00007fd2db403840) + 5968 at postmaster.c:1309
frame #18: 0x000000010169507f postgres`main(argc=3, argv=0x00007fd2db403840) + 751 at main.c:228
frame #19: 0x00007fff8d45c5ad libdyld.dylib`start + 1
(lldb) p state
(char) $1 = 'c'
(lldb) p tstate->state
(char) $2 = 'c'

Also I've noted that some LSN positions look wrong on the publisher:

postgres=# select restart_lsn, confirmed_flush_lsn from pg_replication_slots;
restart_lsn | confirmed_flush_lsn
-------------+---------------------
0/1530EF8 | 7FFF/5E7F6A30
(1 row)

postgres=# select sent_location, write_location, flush_location, replay_location from pg_stat_replication;
sent_location | write_location | flush_location | replay_location
---------------+----------------+----------------+-----------------
0/1530F30 | 7FFF/5E7F6A30 | 7FFF/5E7F6A30 | 7FFF/5E7F6A30
(1 row)

--
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


#19Petr Jelinek
petr@2ndquadrant.com
In reply to: Stas Kelvich (#18)
Re: Logical Replication WIP

On 15/08/16 15:51, Stas Kelvich wrote:

On 11 Aug 2016, at 17:43, Petr Jelinek <petr@2ndquadrant.com> wrote:

* Also I wasn't actually able to run replication itself =) While the regression tests pass, the TAP
tests and a manual run get stuck: pg_subscription_rel.substate never becomes 'r'. I'll investigate
that more and write again.

Interesting, please keep me posted. It's possible for tables to stay in 's' state for some time if there is nothing happening on the server, but that should not mean anything is stuck.

I played around slightly; it seems that the apply worker waits forever for a substate change.

(lldb) bt
* thread #1: tid = 0x183e00, 0x00007fff88c7f2a2 libsystem_kernel.dylib`poll + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
frame #0: 0x00007fff88c7f2a2 libsystem_kernel.dylib`poll + 10
frame #1: 0x00000001017ca8a3 postgres`WaitEventSetWaitBlock(set=0x00007fd2dc816b30, cur_timeout=10000, occurred_events=0x00007fff5e7f67d8, nevents=1) + 51 at latch.c:1108
frame #2: 0x00000001017ca438 postgres`WaitEventSetWait(set=0x00007fd2dc816b30, timeout=10000, occurred_events=0x00007fff5e7f67d8, nevents=1) + 248 at latch.c:941
frame #3: 0x00000001017c9fde postgres`WaitLatchOrSocket(latch=0x000000010ab208a4, wakeEvents=25, sock=-1, timeout=10000) + 254 at latch.c:347
frame #4: 0x00000001017c9eda postgres`WaitLatch(latch=0x000000010ab208a4, wakeEvents=25, timeout=10000) + 42 at latch.c:302
* frame #5: 0x0000000101793352 postgres`wait_for_sync_status_change(tstate=0x0000000101e409b0) + 178 at tablesync.c:228
frame #6: 0x0000000101792bbe postgres`process_syncing_tables_apply(slotname="subbi", end_lsn=140734778796592) + 430 at tablesync.c:436
frame #7: 0x00000001017928c1 postgres`process_syncing_tables(slotname="subbi", end_lsn=140734778796592) + 81 at tablesync.c:518
frame #8: 0x000000010177b620 postgres`LogicalRepApplyLoop(last_received=140734778796592) + 704 at apply.c:1122
frame #9: 0x000000010177bef4 postgres`ApplyWorkerMain(main_arg=0) + 1044 at apply.c:1353
frame #10: 0x000000010174cb5a postgres`StartBackgroundWorker + 826 at bgworker.c:729
frame #11: 0x0000000101762227 postgres`do_start_bgworker(rw=0x00007fd2db700000) + 343 at postmaster.c:5553
frame #12: 0x000000010175d42b postgres`maybe_start_bgworker + 427 at postmaster.c:5761
frame #13: 0x000000010175bccf postgres`sigusr1_handler(postgres_signal_arg=30) + 383 at postmaster.c:4979
frame #14: 0x00007fff9ab2352a libsystem_platform.dylib`_sigtramp + 26
frame #15: 0x00007fff88c7e07b libsystem_kernel.dylib`__select + 11
frame #16: 0x000000010175d5ac postgres`ServerLoop + 252 at postmaster.c:1665
frame #17: 0x000000010175b2e0 postgres`PostmasterMain(argc=3, argv=0x00007fd2db403840) + 5968 at postmaster.c:1309
frame #18: 0x000000010169507f postgres`main(argc=3, argv=0x00007fd2db403840) + 751 at main.c:228
frame #19: 0x00007fff8d45c5ad libdyld.dylib`start + 1
(lldb) p state
(char) $1 = 'c'
(lldb) p tstate->state
(char) $2 = 'c'

Hmm, not sure why that is; it might be related to the LSN being reported
wrong. Could you check what the LSN is there (either in tstate or in
pg_subscription_rel)? Especially in comparison with what the
sent_location is.
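For reference, comparing the two sides could be done roughly like this (the exact catalog columns may differ in this WIP patch, so just select everything on the subscriber side):

```sql
-- On the subscriber: per-table sync state and lsn
SELECT * FROM pg_subscription_rel;

-- On the publisher: position the walsender has sent up to
SELECT application_name, sent_location FROM pg_stat_replication;
```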

Also I've noted that some LSN positions look wrong on the publisher:

postgres=# select restart_lsn, confirmed_flush_lsn from pg_replication_slots;
restart_lsn | confirmed_flush_lsn
-------------+---------------------
0/1530EF8 | 7FFF/5E7F6A30
(1 row)

postgres=# select sent_location, write_location, flush_location, replay_location from pg_stat_replication;
sent_location | write_location | flush_location | replay_location
---------------+----------------+----------------+-----------------
0/1530F30 | 7FFF/5E7F6A30 | 7FFF/5E7F6A30 | 7FFF/5E7F6A30
(1 row)

That's most likely the result of the uninitialized origin_startpos warning. I
am working on a new version of the patch where that part is fixed; if you want
to check this before I send it, the patch looks like this:

diff --git a/src/backend/replication/logical/apply.c 
b/src/backend/replication/logical/apply.c
index 581299e..7a9e775 100644
--- a/src/backend/replication/logical/apply.c
+++ b/src/backend/replication/logical/apply.c
@@ -1353,6 +1353,7 @@ ApplyWorkerMain(Datum main_arg)
                 originid = replorigin_by_name(myslotname, false);
                 replorigin_session_setup(originid);
                 replorigin_session_origin = originid;
+               origin_startpos = replorigin_session_get_progress(false);
                 CommitTransactionCommand();

wrcapi->connect(wrchandle, MySubscription->conninfo, true,

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#20Petr Jelinek
petr@2ndquadrant.com
In reply to: Petr Jelinek (#19)
Re: Logical Replication WIP

Hi all,

attaching updated version of the patch. Still very much WIP but it's
slowly getting there.

Changes since last time:
- Mostly rewrote publication handling in pgoutput, which brings a) the
ability to add FOR ALL TABLES publications, b) better performance (no
syscache lookup for every change like before), and c) correct
invalidation of publications on DDL
- added FOR TABLE and FOR ALL TABLES clause to both CREATE PUBLICATION
and ALTER PUBLICATION so that one can create publication directly with
table list, the FOR TABLE in ALTER PUBLICATION behaves like SET
operation (removes existing, adds new ones)
- fixed several issues with initial table synchronization (most of which
have been reported here)
- added pg_stat_subscription monitoring view
- updated docs to reflect all the changes, also removed the stuff that's
only planned from the docs (there is copy of the planned stuff docs in
the neighboring thread so no need to keep it in the patch)
- added documentation improvements suggested by Steve Singer and removed
the capitalization in the main doc
- added pg_dump support
- improved psql support (\drp+ shows list of tables)
- added flags to COMMIT message in the protocol so that we can add 2PC
support in the future
- fixed DROP SUBSCRIPTION issues and added tests for it

I decided not to deal with ACLs so far, assuming a superuser/REPLICATION
role for now. We can always make it less restrictive later by adding
grantable privileges.

FDW support is still TODO. I think TRUNCATE will have to be solved as
part of other DDL in the future. I do have some ideas what to do with
DDL but I don't plan to implement them in the initial patch.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0003-Define-logical-replication-protocol-and-output-plugi.patch.gz
0004-Make-libpqwalreceiver-reentrant.patch.gz
0005-Add-logical-replication-workers.patch.gz
0006-Logical-replication-support-for-initial-data-copy.patch.gz
0001-Add-PUBLICATION-catalogs-and-DDL.patch.gz
0002-Add-SUBSCRIPTION-catalog-and-DDL.patch.gz