Column Filtering in Logical Replication

Started by Rahila Syed · almost 5 years ago · 257 messages · pgsql-hackers
#1 Rahila Syed
rahilasyed90@gmail.com

Hi,

Filtering of columns at the publisher node will allow for selective
replication of data between publisher and subscriber. In case the updates
on the publisher target only specific columns, the user will have an
option to reduce network consumption by not sending the data for columns
that do not change. Note that replica identity values will always be sent
irrespective of column filtering settings. The column values that are not
sent by the publisher will be populated using local values on the
subscriber. For INSERT commands, non-replicated column values will be
NULL or the default.
If column names are not specified while creating or altering a
publication, all the columns are replicated, as per current behaviour.

The proposal for syntax to add table with column names to publication is as
follows:
Create publication:

CREATE PUBLICATION <pub_name> [ FOR TABLE [ONLY] table_name [(colname [,…])] | FOR ALL TABLES]

Alter publication:

ALTER PUBLICATION <pub_name> ADD TABLE [ONLY] table_name [(colname [, ..])]
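
For example (table and column names here are hypothetical, just to show
how the proposed syntax would be used):

```sql
-- Publish only columns a and b of tab1; replica identity columns
-- would always be replicated regardless of this list.
CREATE PUBLICATION pub1 FOR TABLE tab1 (a, b);

-- Add another table to the publication, restricted to a column subset.
ALTER PUBLICATION pub1 ADD TABLE tab2 (id, payload);
```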

Please find attached a patch that implements the above proposal.
While the patch contains the basic implementation and tests, several
improvements and sanity checks are underway. I will post an updated patch
with those changes soon.

Kindly let me know your opinion.

Thank you,

Rahila Syed

Attachments:

0001-Add-column-filtering-to-logical-replication.patch (+224/-37)
#2 Dilip Kumar
dilipbalaut@gmail.com
In reply to: Rahila Syed (#1)
Re: Column Filtering in Logical Replication

On Thu, Jul 1, 2021 at 1:06 AM Rahila Syed <rahilasyed90@gmail.com> wrote:

> Filtering of columns at the publisher node will allow for selective
> replication of data between publisher and subscriber. [...]
>
> Please find attached a patch that implements the above proposal.
> Kindly let me know your opinion.

I haven't looked into the patch yet but +1 for the idea.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

#3 vignesh C
vignesh21@gmail.com
In reply to: Rahila Syed (#1)
Re: Column Filtering in Logical Replication

On Thu, Jul 1, 2021 at 1:06 AM Rahila Syed <rahilasyed90@gmail.com> wrote:

> Filtering of columns at the publisher node will allow for selective
> replication of data between publisher and subscriber. [...]
>
> Please find attached a patch that implements the above proposal.
> Kindly let me know your opinion.

This idea gives more flexibility to the user, +1 for the feature.

Regards,
Vignesh

#4 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#1)
Re: Column Filtering in Logical Replication

Hello, here are a few comments on this patch.

The patch adds a function get_att_num_by_name; but we have a lsyscache.c
function for that purpose, get_attnum. Maybe that one should be used
instead?

get_tuple_columns_map() returns a bitmapset of the attnos of the columns
in the given list, so its name feels wrong. I propose
get_table_columnset(). However, this function is invoked for every
insert/update change, so it's going to be far too slow to be usable. I
think you need to cache the bitmapset somewhere, so that the function is
only called on first use. I didn't look very closely, but it seems that
struct RelationSyncEntry may be a good place to cache it.

The patch adds a new parse node PublicationTable, but doesn't add
copyfuncs.c, equalfuncs.c, readfuncs.c, outfuncs.c support for it.
Maybe try a compile with WRITE_READ_PARSE_PLAN_TREES and/or
COPY_PARSE_PLAN_TREES enabled to make sure everything is covered.
(I didn't verify that this actually catches anything ...)

The new column in pg_publication_rel is prrel_attr. This name seems at
odds with existing column names (we don't use underscores in catalog
column names). Maybe prattrs is good enough? prrelattrs? We tend to
use plurals for columns that are arrays.

It's not super clear to me that strlist_to_textarray() and related
processing will behave sanely when the column names contain weird
characters such as commas or quotes, or just when used with uppercase
column names. Maybe it's worth having tests that try to break such
cases.
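
For example, tests along these lines (table and column names invented
here) could try to break the quoting paths:

```sql
CREATE TABLE "Weird""Tab" ("CamelCase" int, "with,comma" int, "with""quote" int);
CREATE PUBLICATION pub_quote FOR TABLE "Weird""Tab" ("CamelCase", "with,comma", "with""quote");
```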

You seem to have left a debugging "elog(LOG)" line in OpenTableList.

I got warnings from "git am" about trailing whitespace being added by
the patch in two places.

Thanks!

--
Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/

#5 Peter Smith
smithpb2250@gmail.com
In reply to: Rahila Syed (#1)
Re: Column Filtering in Logical Replication

Hi, I was wondering if/when a subset of cols is specified then does
that mean it will be possible for the table to be replicated to a
*smaller* table at the subscriber side?

e.g. Can a table with 7 cols be replicated to a table with 2 cols?

table tab1(a,b,c,d,e,f,g) --> CREATE PUBLICATION pub1 FOR TABLE
tab1(a,b) --> table tab1(a,b)

~~

I thought maybe that should be possible, but the expected behaviour
for that scenario was not very clear to me from the thread/patch
comments. And the new TAP test uses the tab1 table created exactly the
same for pub/sub, so I couldn't tell from the test code either.

------
Kind Regards,
Peter Smith.
Fujitsu Australia

#6 Rahila Syed
rahilasyed90@gmail.com
In reply to: Peter Smith (#5)
Re: Column Filtering in Logical Replication

Hi Peter,

> Hi, I was wondering if/when a subset of cols is specified then does
> that mean it will be possible for the table to be replicated to a
> *smaller* table at the subscriber side?
>
> e.g. Can a table with 7 cols be replicated to a table with 2 cols?
>
> table tab1(a,b,c,d,e,f,g) --> CREATE PUBLICATION pub1 FOR TABLE
> tab1(a,b) --> table tab1(a,b)
>
> ~~
>
> I thought maybe that should be possible, but the expected behaviour
> for that scenario was not very clear to me from the thread/patch
> comments. And the new TAP test uses the tab1 table created exactly the
> same for pub/sub, so I couldn't tell from the test code either.

Currently, this capability is not included in the patch. If the table on
the subscriber server has fewer attributes than that on the publisher
server, it throws an error at the time of CREATE SUBSCRIPTION.

About having such a functionality, I don't immediately see any issue with
it as long as we make sure replica identity columns are always present on
both instances. However, we need to carefully consider situations in
which a server subscribes to multiple publications, each publishing a
different subset of columns of a table.

Thank you,
Rahila Syed

#7 Rahila Syed
rahilasyed90@gmail.com
In reply to: Alvaro Herrera (#4)
Re: Column Filtering in Logical Replication

Hi Alvaro,

Thank you for comments.

> The patch adds a function get_att_num_by_name; but we have a lsyscache.c
> function for that purpose, get_attnum. Maybe that one should be used
> instead?

Thank you for pointing that out, I agree it makes sense to reuse the
existing function. Changed it accordingly in the attached patch.

> get_tuple_columns_map() returns a bitmapset of the attnos of the columns
> in the given list, so its name feels wrong. I propose
> get_table_columnset(). However, this function is invoked for every
> insert/update change, so it's going to be far too slow to be usable. I
> think you need to cache the bitmapset somewhere, so that the function is
> only called on first use. I didn't look very closely, but it seems that
> struct RelationSyncEntry may be a good place to cache it.

Makes sense, changed accordingly.

> The patch adds a new parse node PublicationTable, but doesn't add
> copyfuncs.c, equalfuncs.c, readfuncs.c, outfuncs.c support for it.
> Maybe try a compile with WRITE_READ_PARSE_PLAN_TREES and/or
> COPY_PARSE_PLAN_TREES enabled to make sure everything is covered.
> (I didn't verify that this actually catches anything ...)

I will test this and include these changes in the next version.

> The new column in pg_publication_rel is prrel_attr. This name seems at
> odds with existing column names (we don't use underscores in catalog
> column names). Maybe prattrs is good enough? prrelattrs? We tend to
> use plurals for columns that are arrays.

Renamed it to prattrs as per suggestion.

> It's not super clear to me that strlist_to_textarray() and related
> processing will behave sanely when the column names contain weird
> characters such as commas or quotes, or just when used with uppercase
> column names. Maybe it's worth having tests that try to break such
> cases.

Sure, I will include these tests in the next version.

> You seem to have left a debugging "elog(LOG)" line in OpenTableList.

Removed.

> I got warnings from "git am" about trailing whitespace being added by
> the patch in two places.

Should be fixed now.

Thank you,
Rahila Syed

Attachments:

v1-0001-Add-column-filtering-to-logical-replication.patch (+216/-37)
#8 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Rahila Syed (#6)
Re: Column Filtering in Logical Replication

On 7/12/21 10:32 AM, Rahila Syed wrote:

> Hi Peter,
>
>> Hi, I was wondering if/when a subset of cols is specified then does
>> that mean it will be possible for the table to be replicated to a
>> *smaller* table at the subscriber side?
>>
>> e.g. Can a table with 7 cols be replicated to a table with 2 cols?
>>
>> table tab1(a,b,c,d,e,f,g) --> CREATE PUBLICATION pub1 FOR TABLE
>> tab1(a,b) --> table tab1(a,b)
>>
>> ~~
>>
>> I thought maybe that should be possible, but the expected behaviour
>> for that scenario was not very clear to me from the thread/patch
>> comments. And the new TAP test uses the tab1 table created exactly the
>> same for pub/sub, so I couldn't tell from the test code either.
>
> Currently, this capability is not included in the patch. If the table on
> the subscriber server has fewer attributes than that on the publisher
> server, it throws an error at the time of CREATE SUBSCRIPTION.

That's a bit surprising, to be honest. I do understand the patch simply
treats the filtered columns as "unchanged" because that's the simplest
way to filter the *data* of the columns. But if someone told me we can
"filter columns" I'd expect this to work without the columns on the
subscriber.

> About having such a functionality, I don't immediately see any issue
> with it as long as we make sure replica identity columns are always
> present on both instances.

Yeah, that seems like an inherent requirement.

> However, need to carefully consider situations in which a server
> subscribes to multiple publications, each publishing a different subset
> of columns of a table.
Isn't that pretty much the same situation as for multiple subscriptions
each with a different set of I/U/D operations? IIRC we simply merge
those, so why not to do the same thing here and merge the attributes?
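
As an illustrative sketch of that merge semantics (plain Python; the
names are invented here and are not from the patch), taking the union of
the per-publication column filters would look like:

```python
def merge_column_filters(filters):
    """Merge per-publication column filters for one table.

    filters: a list with one entry per publication; each entry is a set
    of published column names, or None meaning "all columns" (as when no
    column list was given for that publication).
    Returns the union, or None if any publication publishes all columns.
    """
    merged = set()
    for f in filters:
        if f is None:  # one publication has no filter, so everything is sent
            return None
        merged |= f
    return merged

# A subscription to pub1 (a, b) and pub2 (b, c) would receive a, b and c.
combined = merge_column_filters([{"a", "b"}, {"b", "c"}])
```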

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#9 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Rahila Syed (#7)
Re: Column Filtering in Logical Replication

On 7/12/21 11:38 AM, Rahila Syed wrote:

> Hi Alvaro,
>
> Thank you for comments.
>
>> The patch adds a function get_att_num_by_name; but we have a lsyscache.c
>> function for that purpose, get_attnum. Maybe that one should be used
>> instead?
>
> Thank you for pointing that out, I agree it makes sense to reuse the
> existing function. Changed it accordingly in the attached patch.
>
>> get_tuple_columns_map() returns a bitmapset of the attnos of the columns
>> in the given list, so its name feels wrong. I propose
>> get_table_columnset(). However, this function is invoked for every
>> insert/update change, so it's going to be far too slow to be usable. I
>> think you need to cache the bitmapset somewhere, so that the function is
>> only called on first use. I didn't look very closely, but it seems that
>> struct RelationSyncEntry may be a good place to cache it.
>
> Makes sense, changed accordingly.

To nitpick, I find "Bitmapset *att_list" a bit annoying, because it's
not really a list ;-)

FWIW "make check" fails for me with this version, due to segfault in
OpenTableLists. Apparently there's some confusion - the code expects the
list to contain PublicationTable nodes, and tries to extract the
RangeVar from the elements. But the list actually contains RangeVar, so
this crashes and burns. See the attached backtrace.

I'd bet this is because the patch uses list of RangeVar in some cases
and list of PublicationTable in some cases, similarly to the "row
filtering" patch nearby. IMHO this is just confusing and we should
always pass list of PublicationTable nodes.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachments:

crash.txt
#10 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#9)
Re: Column Filtering in Logical Replication

On 2021-Jul-12, Tomas Vondra wrote:

> FWIW "make check" fails for me with this version, due to segfault in
> OpenTableLists. Apparently there's some confusion - the code expects the
> list to contain PublicationTable nodes, and tries to extract the
> RangeVar from the elements. But the list actually contains RangeVar, so
> this crashes and burns. See the attached backtrace.
>
> I'd bet this is because the patch uses list of RangeVar in some cases
> and list of PublicationTable in some cases, similarly to the "row
> filtering" patch nearby. IMHO this is just confusing and we should
> always pass list of PublicationTable nodes.

+1 don't make the code guess what type of list it is. Changing all the
uses of that node to deal with PublicationTable seems best.

--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Cuando no hay humildad las personas se degradan" (A. Christie)

#11 Rahila Syed
rahilasyed90@gmail.com
In reply to: Tomas Vondra (#8)
Re: Column Filtering in Logical Replication

Hi Tomas,

Thank you for your comments.

>> Currently, this capability is not included in the patch. If the table on
>> the subscriber server has fewer attributes than that on the publisher
>> server, it throws an error at the time of CREATE SUBSCRIPTION.
>
> That's a bit surprising, to be honest. I do understand the patch simply
> treats the filtered columns as "unchanged" because that's the simplest
> way to filter the *data* of the columns. But if someone told me we can
> "filter columns" I'd expect this to work without the columns on the
> subscriber.

OK, I will look into adding this.

>> However, need to carefully consider situations in which a server
>> subscribes to multiple publications, each publishing a different subset
>> of columns of a table.
>
> Isn't that pretty much the same situation as for multiple subscriptions
> each with a different set of I/U/D operations? IIRC we simply merge
> those, so why not to do the same thing here and merge the attributes?

Yeah, I agree with the solution to merge the attributes, similar to how
operations are merged. My concern was also from an implementation point
of view, i.e. whether it will be a very drastic change. I have now had a
look at how remote relation attributes are acquired for comparison with
local attributes at the subscriber. It seems that the publisher will need
to send the information about the filtered columns for each publication
specified during CREATE SUBSCRIPTION. This will be read at the subscriber
side, which in turn updates its cache accordingly. Currently, the
subscriber expects all attributes of a published relation to be present.
I will add code for this in the next version of the patch.

> To nitpick, I find "Bitmapset *att_list" a bit annoying, because it's
> not really a list ;-)

I will make this change with the next version.

> FWIW "make check" fails for me with this version, due to segfault in
> OpenTableLists. Apparently there's some confusion - the code expects the
> list to contain PublicationTable nodes, and tries to extract the
> RangeVar from the elements. But the list actually contains RangeVar, so
> this crashes and burns. See the attached backtrace.

Thank you for the report. This is fixed in the attached version; now all
publication function calls accept the PublicationTableInfo list.

Thank you,
Rahila Syed

Attachments:

v2-0001-Add-column-filtering-to-logical-replication.patch (+235/-43)
#12 Ibrar Ahmed
ibrar.ahmad@gmail.com
In reply to: Rahila Syed (#11)
Re: Column Filtering in Logical Replication

On Tue, Jul 13, 2021 at 7:44 PM Rahila Syed <rahilasyed90@gmail.com> wrote:

> Thank you for the report. This is fixed in the attached version; now all
> publication function calls accept the PublicationTableInfo list. [...]

The patch does not apply; a rebase is required.

Hunk #8 succeeded at 1259 (offset 99 lines).
Hunk #9 succeeded at 1360 (offset 99 lines).
1 out of 9 hunks FAILED -- saving rejects to file
src/backend/replication/pgoutput/pgoutput.c.rej
patching file src/include/catalog/pg_publication.h

Changing the status to "Waiting on Author"

--
Ibrar Ahmed

#13 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#11)
Re: Column Filtering in Logical Replication

Hello,

I think this looks good regarding the PublicationRelationInfo API that was
discussed.

Looking at OpenTableList(), I think you forgot to update the comment --
it says "open relations specified by a RangeVar list", but the list is
now of PublicationTable. Also I think it would be good to say that the
returned tables are PublicationRelationInfo, maybe such as "In the
returned list of PublicationRelationInfo, the tables are locked ..."

In AlterPublicationTables() I was confused by some code that seemed
commented a bit too verbosely (for a moment I thought the whole list was
being copied into a different format). May I suggest something more
compact like

	/* Not yet in list; open it and add it to the list */
	if (!found)
	{
		Relation	oldrel;
		PublicationRelationInfo *pubrel;

		oldrel = table_open(oldrelid, ShareUpdateExclusiveLock);

		/* Wrap it in PublicationRelationInfo */
		pubrel = palloc(sizeof(PublicationRelationInfo));
		pubrel->relation = oldrel;
		pubrel->relid = oldrelid;
		pubrel->columns = NIL;	/* not needed */

		delrels = lappend(delrels, pubrel);
	}

Thanks!

--
Álvaro Herrera 39°49'30"S 73°17'W — https://www.EnterpriseDB.com/

#14 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#11)
Re: Column Filtering in Logical Replication

One thing I just happened to notice is this part of your commit message

: REPLICA IDENTITY columns are always replicated
: irrespective of column names specification.

... for which you don't have any tests -- I mean, create a table with a
certain REPLICA IDENTITY and later try to publish a set of columns that
doesn't include all the columns in the replica identity, then verify
that those columns are indeed published.

Having said that, I'm not sure I agree with this design decision; what I
think this is doing is hiding from the user the fact that they are
publishing columns that they don't want to publish. I think as a user I
would rather get an error in that case:

ERROR: invalid column list in published set
DETAIL: The set of published commands does not include all the replica identity columns.

or something like that. Avoid possible nasty surprises of security-
leaking nature.

--
Álvaro Herrera 39°49'30"S 73°17'W — https://www.EnterpriseDB.com/
"On the other flipper, one wrong move and we're Fatal Exceptions"
(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)

#15 Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#11)
Re: Column Filtering in Logical Replication

Hi,

>>> Currently, this capability is not included in the patch. If the table on
>>> the subscriber server has fewer attributes than that on the publisher
>>> server, it throws an error at the time of CREATE SUBSCRIPTION.
>>
>> That's a bit surprising, to be honest. I do understand the patch simply
>> treats the filtered columns as "unchanged" because that's the simplest
>> way to filter the *data* of the columns. But if someone told me we can
>> "filter columns" I'd expect this to work without the columns on the
>> subscriber.
>
> OK, I will look into adding this.

This has been added in the attached patch. Now, instead of treating the
filtered columns as unchanged and sending a byte with that information,
columns that are filtered out are not sent to the subscriber server at
all. Along with saving network bandwidth, this allows logical replication
to work even between tables with different numbers of columns, i.e. with
the table on the subscriber server containing only the filtered columns.
Currently, replica identity columns are replicated irrespective of the
presence of the column filters, hence the table on the subscriber side
must contain the replica identity columns.
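
As a rough model of that behaviour (a plain-Python simplification for
illustration, not the patch's actual C code), the publisher-side
projection could be pictured as:

```python
def columns_to_send(row, column_filter, replica_identity):
    """Project a publisher row onto the columns that get replicated.

    row: dict of column name -> value on the publisher.
    column_filter: set of published column names, or None for "all".
    replica_identity: set of identity column names, always included.
    """
    if column_filter is None:
        return dict(row)
    keep = column_filter | replica_identity
    return {col: val for col, val in row.items() if col in keep}

row = {"id": 1, "a": 10, "b": 20, "c": 30}
# Publish only column a; id is the replica identity, so it is sent too,
# while b and c never leave the publisher.
sent = columns_to_send(row, {"a"}, {"id"})
```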

> The patch adds a new parse node PublicationTable, but doesn't add
> copyfuncs.c, equalfuncs.c, readfuncs.c, outfuncs.c support for it.

Thanks, added this.

> Looking at OpenTableList(), I think you forgot to update the comment --
> it says "open relations specified by a RangeVar list",

Thank you for the review, modified this.

> To nitpick, I find "Bitmapset *att_list" a bit annoying, because it's
> not really a list ;-)

Changed this.

> It's not super clear to me that strlist_to_textarray() and related
> processing will behave sanely when the column names contain weird
> characters such as commas or quotes, or just when used with uppercase
> column names. Maybe it's worth having tests that try to break such
> cases.

Added a few test cases for this.

> In AlterPublicationTables() I was confused by some code that seemed
> commented a bit too verbosely

Modified this as per the suggestion.

> : REPLICA IDENTITY columns are always replicated
> : irrespective of column names specification.
>
> ... for which you don't have any tests

I have added these tests.

> Having said that, I'm not sure I agree with this design decision; what I
> think this is doing is hiding from the user the fact that they are
> publishing columns that they don't want to publish. I think as a user I
> would rather get an error in that case:
>
>     ERROR: invalid column list in published set
>     DETAIL: The set of published commands does not include all the
>     replica identity columns.
>
> or something like that. Avoid possible nasty surprises of security-
> leaking nature.

Ok, thank you for your opinion. I agree that giving an explicit error in
this case will be safer. I will include this, in case there are no
counter views.

Thank you for your review comments. Please find attached the rebased and
updated patch.

Thank you,
Rahila Syed

Attachments:

v3-0001-Add-column-filtering-to-logical-replication.patch (+499/-73)
#16 Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#15)
Re: Column Filtering in Logical Replication

On Mon, Aug 9, 2021 at 1:36 AM Rahila Syed <rahilasyed90@gmail.com> wrote:

>> Having said that, I'm not sure I agree with this design decision; what I
>> think this is doing is hiding from the user the fact that they are
>> publishing columns that they don't want to publish. I think as a user I
>> would rather get an error in that case:
>>
>>     ERROR: invalid column list in published set
>>     DETAIL: The set of published commands does not include all the
>>     replica identity columns.
>>
>> or something like that. Avoid possible nasty surprises of security-
>> leaking nature.
>
> Ok, thank you for your opinion. I agree that giving an explicit error
> in this case will be safer.

+1 for an explicit error in this case.

Can you please explain why you have the restriction for including
replica identity columns and do we want to put a similar restriction
for the primary key? As far as I understand, if we allow default
values on subscribers for replica identity, then probably updates,
deletes won't work as they need to use replica identity (or PK) to
search the required tuple. If so, shouldn't we add this restriction
only when a publication has been defined for one of these (Update,
Delete) actions?

Another point is what if someone drops the column used in one of the
publications? Do we want to drop the entire relation from publication
or just remove the column filter or something else?

Do we want to consider that the columns specified in the filter must
not have NOT NULL constraint? Because, otherwise, the subscriber will
error out inserting such rows?
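
To illustrate the NOT NULL concern with a hypothetical subscriber-side
table (names invented here):

```sql
-- Subscriber-side table where column c is NOT NULL without a default.
-- If the publisher's column filter omits c, replicated inserts would in
-- effect try to store NULL into c, failing the same way this does:
CREATE TABLE tab1 (id int PRIMARY KEY, a int, c int NOT NULL);
INSERT INTO tab1 (id, a) VALUES (1, 10);  -- fails: c violates NOT NULL
```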

Minor comments:
================
pq_sendbyte(out, flags);
-
/* attribute name */
pq_sendstring(out, NameStr(att->attname));

@@ -953,6 +1000,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel)

/* attribute mode */
pq_sendint32(out, att->atttypmod);
+
}

  bms_free(idattrs);
diff --git a/src/backend/replication/logical/relation.c
b/src/backend/replication/logical/relation.c
index c37e2a7e29..d7a7b00841 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -354,7 +354,6 @@ logicalrep_rel_open(LogicalRepRelId remoteid,
LOCKMODE lockmode)

attnum = logicalrep_rel_att_by_name(remoterel,
NameStr(attr->attname));
-
entry->attrmap->attnums[i] = attnum;

There are quite a few places in the patch that contains spurious line
additions or removals.

--
With Regards,
Amit Kapila.

#17 Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#16)
Re: Column Filtering in Logical Replication

On Mon, Aug 9, 2021 at 3:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:

> Do we want to consider that the columns specified in the filter must
> not have NOT NULL constraint? Because, otherwise, the subscriber will
> error out inserting such rows?

I noticed that other databases provide this feature [1] and they allow
users to specify "Columns that are included in Filter" or specify "All
columns to be included in filter except for a subset of columns". I am
not sure if we want to provide both ways in the first version but at
least we should consider it as a future extensibility requirement and
try to choose syntax accordingly.

[1]: https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/selecting-columns.html#GUID-9A851C8B-48F7-43DF-8D98-D086BE069E20

--
With Regards,
Amit Kapila.

#18 Rahila Syed
rahilasyed90@gmail.com
In reply to: Amit Kapila (#16)
Re: Column Filtering in Logical Replication

Hi Amit,

Thanks for your review.

Can you please explain why you have the restriction for including
replica identity columns and do we want to put a similar restriction
for the primary key? As far as I understand, if we allow default
values on subscribers for replica identity, then probably updates,
deletes won't work as they need to use replica identity (or PK) to
search the required tuple. If so, shouldn't we add this restriction
only when a publication has been defined for one of these (Update,
Delete) actions?

Yes, as you mentioned, they are needed for updates and deletes to work.
The requirement to include replica identity columns in column filters
exists because, if the replica identity column values did not change, the
old row's replica identity columns are not sent to the subscriber; we
would therefore need the new replica identity columns to be sent to
identify the row that is to be updated or deleted.
I haven't tested whether it would break insert as well. I will update
the patch accordingly.

Another point is what if someone drops the column used in one of the
publications? Do we want to drop the entire relation from publication
or just remove the column filter or something else?

Thanks for pointing this out. Currently, this is not handled in the patch.
I think dropping the column from the filter would make sense, along the
lines of the table being dropped from the publication when the table
itself is dropped.

Do we want to consider that the columns specified in the filter must
not have NOT NULL constraint? Because, otherwise, the subscriber will
error out inserting such rows?

I think you mean columns *not* specified in the filter must not have a
NOT NULL constraint on the subscriber, as this will break during insert,
when the subscriber tries to insert NULL for columns not sent by the
publisher.
I will look into fixing this. Probably this won't be a problem in
case the column is auto-generated or has a default value.

Minor comments:
================
pq_sendbyte(out, flags);
-
/* attribute name */
pq_sendstring(out, NameStr(att->attname));

@@ -953,6 +1000,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel)

/* attribute mode */
pq_sendint32(out, att->atttypmod);
+
}

bms_free(idattrs);
diff --git a/src/backend/replication/logical/relation.c
b/src/backend/replication/logical/relation.c
index c37e2a7e29..d7a7b00841 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -354,7 +354,6 @@ logicalrep_rel_open(LogicalRepRelId remoteid,
LOCKMODE lockmode)

attnum = logicalrep_rel_att_by_name(remoterel,
NameStr(attr->attname));
-
entry->attrmap->attnums[i] = attnum;

There are quite a few places in the patch that contains spurious line
additions or removals.

Thank you for your comments, I will fix these.

Thank you,
Rahila Syed

#19Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#18)
Re: Column Filtering in Logical Replication

On Thu, Aug 12, 2021 at 8:40 AM Rahila Syed <rahilasyed90@gmail.com> wrote:

Can you please explain why you have the restriction for including
replica identity columns and do we want to put a similar restriction
for the primary key? As far as I understand, if we allow default
values on subscribers for replica identity, then probably updates,
deletes won't work as they need to use replica identity (or PK) to
search the required tuple. If so, shouldn't we add this restriction
only when a publication has been defined for one of these (Update,
Delete) actions?

Yes, as you mentioned, they are needed for updates and deletes to work.
The requirement to include replica identity columns in column filters exists because,
if the replica identity column values did not change, the old row's replica identity columns
are not sent to the subscriber; we would therefore need the new replica identity columns
to be sent to identify the row that is to be updated or deleted.
I haven't tested whether it would break insert as well. I will update the patch accordingly.

Okay, but then we also need to ensure that the user shouldn't be
allowed to enable the 'update' or 'delete' for a publication that
contains some filter that doesn't have replica identity columns.
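A sketch of how that restriction might surface to the user (hypothetical session; the exact error text and checks were still under discussion in this thread):

```sql
CREATE TABLE t (id int PRIMARY KEY, payload text, note text);

-- Filter includes the replica identity column (id): all actions allowed.
CREATE PUBLICATION pub_ok FOR TABLE t (id, payload);

-- Filter omits the replica identity: acceptable only if the publication
-- does not publish update/delete.
CREATE PUBLICATION pub_ins FOR TABLE t (payload)
    WITH (publish = 'insert');

-- Should be rejected, since update/delete need the replica identity
-- columns to locate the row on the subscriber:
CREATE PUBLICATION pub_bad FOR TABLE t (payload);
-- ERROR:  invalid column list in published set  (illustrative message)
```

The same check would also have to fire on ALTER PUBLICATION ... SET (publish = ...) when update or delete is enabled after the fact.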

Another point is what if someone drops the column used in one of the
publications? Do we want to drop the entire relation from publication
or just remove the column filter or something else?

Thanks for pointing this out. Currently, this is not handled in the patch.
I think dropping the column from the filter would make sense on the lines
of the table being dropped from publication, in case of drop table.

I think it would be tricky to remove just the column from the filter,
because you would need to recompute the entire filter and update it
again. Also, you might need to do this for all the publications that
have that particular column in their filter clause. It might be easier
to drop the entire filter, but please check; if another way turns out to
be easier, that is good too.

Do we want to consider that the columns specified in the filter must
not have NOT NULL constraint? Because, otherwise, the subscriber will
error out inserting such rows?

I think you mean columns *not* specified in the filter must not have a NOT NULL constraint
on the subscriber, as this will break during insert, when the subscriber tries to insert NULL
for columns not sent by the publisher.

Right.

--
With Regards,
Amit Kapila.

#20Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#18)
Re: Column Filtering in Logical Replication

Hi,

Another point is what if someone drops the column used in one of the
publications? Do we want to drop the entire relation from publication
or just remove the column filter or something else?

After thinking about this, I think it is best to remove the entire table
from the publication if a column specified in the column filter is
dropped from the table.
If we dropped the entire filter without dropping the table, all the
columns would be replicated, and the downstream server's table might not
have those columns.
If we dropped only the column from the filter, we would have to recreate
the filter and re-check the replica identity. That means if the replica
identity column is dropped, you can't remove it from the filter, and
would have to drop the entire publication-table mapping anyway.

Thus, I think it is cleanest to drop the entire relation from the publication.

This has been implemented in the attached version.
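Under that behavior, a session might look like this (sketch; assumes the semantics of the attached patch as described above):

```sql
CREATE TABLE t (id int PRIMARY KEY, a int, b int);
CREATE PUBLICATION pub FOR TABLE t (id, a);

-- Dropping a column that appears in the filter removes the
-- whole table from the publication, per the patch's behavior:
ALTER TABLE t DROP COLUMN a;

-- t should no longer be listed for pub:
SELECT * FROM pg_publication_tables WHERE pubname = 'pub';
```

Dropping column b, which is not in the filter, would leave the publication untouched.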

Do we want to consider that the columns specified in the filter must
not have NOT NULL constraint? Because, otherwise, the subscriber will
error out inserting such rows?

I think you mean columns *not* specified in the filter must not have a
NOT NULL constraint on the subscriber, as this will break during insert,
when the subscriber tries to insert NULL for columns not sent by the
publisher.
I will look into fixing this. Probably this won't be a problem in
case the column is auto-generated or has a default value.

I am not sure this needs to be handled. Ideally, we would need to
prevent the subscriber tables from having a NOT NULL constraint on
non-filter columns if the publisher uses column filters for the table.
There is no way to do this at the time of creating the table on the
subscriber. As this involves querying the publisher for the filter
information, it could be done at the time of initial table
synchronization, i.e. error out if any of the subscribed tables has a
NOT NULL constraint on a non-filter column.
That would force the user to drop and recreate the subscription after
removing the NOT NULL constraint from the table.
I think the same can be achieved by doing nothing and letting the
subscriber error out while inserting rows.
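The failure mode being discussed can be sketched as follows (illustrative; the exact error comes from the subscriber's apply worker at insert time):

```sql
-- On the publisher: column b is excluded from the filter.
CREATE TABLE t (id int PRIMARY KEY, a int, b int);
CREATE PUBLICATION pub FOR TABLE t (id, a);

-- On the subscriber: b is NOT NULL with no default, so every
-- replicated INSERT attempts to store NULL into b and fails:
CREATE TABLE t (id int PRIMARY KEY, a int, b int NOT NULL);
-- apply worker: null value in column "b" violates not-null constraint

-- A default (or generated column) on the subscriber avoids the error:
CREATE TABLE t (id int PRIMARY KEY, a int, b int NOT NULL DEFAULT 0);
```

So erroring out at table sync would only be an earlier, friendlier report of the same condition.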

Minor comments:

================
pq_sendbyte(out, flags);
-
/* attribute name */
pq_sendstring(out, NameStr(att->attname));

@@ -953,6 +1000,7 @@ logicalrep_write_attrs(StringInfo out, Relation rel)

/* attribute mode */
pq_sendint32(out, att->atttypmod);
+
}

bms_free(idattrs);
diff --git a/src/backend/replication/logical/relation.c
b/src/backend/replication/logical/relation.c
index c37e2a7e29..d7a7b00841 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -354,7 +354,6 @@ logicalrep_rel_open(LogicalRepRelId remoteid,
LOCKMODE lockmode)

attnum = logicalrep_rel_att_by_name(remoterel,
NameStr(attr->attname));
-
entry->attrmap->attnums[i] = attnum;

There are quite a few places in the patch that contains spurious line
additions or removals.

Fixed these in the attached patch.

Having said that, I'm not sure I agree with this design decision; what I
think this is doing is hiding from the user the fact that they are
publishing columns that they don't want to publish. I think as a user I
would rather get an error in that case:

ERROR: invalid column list in published set
DETAIL: The set of published commands does not include all the replica
identity columns.

Added this.

Also added some more tests. Please find attached a rebased and updated
patch.

Thank you,
Rahila Syed

Attachments:

v4-0001-Add-column-filtering-to-logical-replication.patchapplication/octet-stream; name=v4-0001-Add-column-filtering-to-logical-replication.patchDownload+574-72
#21Peter Smith
smithpb2250@gmail.com
In reply to: Rahila Syed (#20)
#22Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#20)
#23Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#20)
#24Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#22)
#25Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#20)
#26Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#25)
#27Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#24)
#28Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#27)
#29Rahila Syed
rahilasyed90@gmail.com
In reply to: Amit Kapila (#28)
#30Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#29)
#31Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#20)
#32Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#30)
#33Peter Smith
smithpb2250@gmail.com
In reply to: Alvaro Herrera (#30)
#34Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#29)
#35Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#30)
#36Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#35)
#37Amit Kapila
amit.kapila16@gmail.com
In reply to: Dilip Kumar (#36)
#38Euler Taveira
euler@eulerto.com
In reply to: Alvaro Herrera (#30)
#39Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#30)
#40Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#39)
#41vignesh C
vignesh21@gmail.com
In reply to: Alvaro Herrera (#40)
#42Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: vignesh C (#41)
#43Euler Taveira
euler@eulerto.com
In reply to: vignesh C (#41)
#44Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: vignesh C (#41)
#45Peter Smith
smithpb2250@gmail.com
In reply to: Alvaro Herrera (#30)
#46Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#42)
#47Amit Kapila
amit.kapila16@gmail.com
In reply to: Euler Taveira (#43)
#48vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#46)
#49Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#46)
#50Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#49)
#51Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: vignesh C (#48)
#52Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#51)
#53vignesh C
vignesh21@gmail.com
In reply to: Alvaro Herrera (#51)
#54Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Alvaro Herrera (#52)
#55Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#53)
#56Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: vignesh C (#41)
#57Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#56)
#58vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#57)
#59Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#55)
#60Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: vignesh C (#58)
#61Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#60)
#62Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#61)
#63Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#62)
#64Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Alvaro Herrera (#59)
#65vignesh C
vignesh21@gmail.com
In reply to: Alvaro Herrera (#59)
#66Amit Kapila
amit.kapila16@gmail.com
In reply to: vignesh C (#65)
#67vignesh C
vignesh21@gmail.com
In reply to: Amit Kapila (#66)
#68Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#66)
#69Rahila Syed
rahilasyed90@gmail.com
In reply to: Alvaro Herrera (#63)
#70Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#68)
#71Amit Kapila
amit.kapila16@gmail.com
In reply to: Rahila Syed (#69)
#72Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#70)
#73Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#72)
#74Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#1)
#75Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#74)
#76Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#74)
#77Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Smith (#45)
#78vignesh C
vignesh21@gmail.com
In reply to: Alvaro Herrera (#77)
#79Peter Eisentraut
peter_e@gmx.net
In reply to: Alvaro Herrera (#76)
#80Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Eisentraut (#79)
#81Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#22)
#82Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#81)
#83Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#82)
#84Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#83)
#85Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#84)
#86Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#85)
#87Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#86)
#88Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#87)
#89Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#88)
#90Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#84)
#91Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Alvaro Herrera (#84)
#92Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Zhijie Hou (Fujitsu) (#91)
#93Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#90)
#94Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#88)
#95Peter Eisentraut
peter_e@gmx.net
In reply to: Amit Kapila (#94)
#96Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Alvaro Herrera (#92)
#97Rahila Syed
rahilasyed90@gmail.com
In reply to: Alvaro Herrera (#86)
#98Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#97)
#99Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#1)
#100Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#99)
#101Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#100)
#102Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#101)
#103Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#101)
#104Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#96)
#105Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#102)
#106Peter Eisentraut
peter_e@gmx.net
In reply to: Alvaro Herrera (#99)
#107Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Eisentraut (#106)
#108Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#107)
#109Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#108)
#110Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Rahila Syed (#1)
#111Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#110)
#112Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#111)
#113Justin Pryzby
pryzby@telsasoft.com
In reply to: Alvaro Herrera (#112)
#114Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Justin Pryzby (#113)
#115Justin Pryzby
pryzby@telsasoft.com
In reply to: Alvaro Herrera (#114)
#116Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Justin Pryzby (#115)
#117Justin Pryzby
pryzby@telsasoft.com
In reply to: Alvaro Herrera (#116)
#118Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Justin Pryzby (#117)
#119Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#116)
#120Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#107)
#121Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#119)
#122Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#120)
#123Peter Eisentraut
peter_e@gmx.net
In reply to: Alvaro Herrera (#116)
#124Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Eisentraut (#123)
#125Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Eisentraut (#123)
#126Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Peter Eisentraut (#123)
#127Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#126)
#128Justin Pryzby
pryzby@telsasoft.com
In reply to: Alvaro Herrera (#126)
#129Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#127)
#130Peter Eisentraut
peter_e@gmx.net
In reply to: Alvaro Herrera (#129)
#131Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Justin Pryzby (#128)
#132Amit Kapila
amit.kapila16@gmail.com
In reply to: Justin Pryzby (#128)
#133tanghy.fnst@fujitsu.com
tanghy.fnst@fujitsu.com
In reply to: Amit Kapila (#132)
#134Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#132)
#135Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#134)
#136Peter Eisentraut
peter_e@gmx.net
In reply to: Amit Kapila (#135)
#137Peter Smith
smithpb2250@gmail.com
In reply to: Alvaro Herrera (#127)
#138Peter Eisentraut
peter_e@gmx.net
In reply to: Alvaro Herrera (#129)
#139Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Eisentraut (#138)
#140Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Smith (#137)
#141Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#139)
#142Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Alvaro Herrera (#141)
#143Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#142)
#144Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#139)
#145Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#144)
#146Justin Pryzby
pryzby@telsasoft.com
In reply to: Tomas Vondra (#145)
#147Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#145)
#148Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#147)
#149Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#148)
#150Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#148)
#151Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Amit Kapila (#150)
#152Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#150)
#153Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#150)
#154Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Zhijie Hou (Fujitsu) (#151)
#155Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Eisentraut (#149)
#156Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#154)
#157Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#153)
#158Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#156)
#159Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#157)
#160Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#158)
#161Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#159)
#162wangw.fnst@fujitsu.com
wangw.fnst@fujitsu.com
In reply to: Tomas Vondra (#158)
#163Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#158)
#164Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: wangw.fnst@fujitsu.com (#162)
#165Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#163)
#166Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#165)
#167Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#166)
#168Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#167)
#169Zhijie Hou (Fujitsu)
houzj.fnst@fujitsu.com
In reply to: Tomas Vondra (#167)
#170Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#168)
#171Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Zhijie Hou (Fujitsu) (#169)
#172Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#171)
#173Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#172)
#174shiy.fnst@fujitsu.com
shiy.fnst@fujitsu.com
In reply to: Tomas Vondra (#167)
#175Amit Kapila
amit.kapila16@gmail.com
In reply to: Zhijie Hou (Fujitsu) (#169)
#176Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#173)
#177Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#175)
#178Amit Kapila
amit.kapila16@gmail.com
In reply to: shiy.fnst@fujitsu.com (#174)
#179Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#177)
#180Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#167)
#181Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Eisentraut (#180)
#182Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#179)
#183Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#182)
#184Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#183)
#185Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#184)
#186Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#184)
#187Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#186)
#188Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#187)
#189Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#185)
#190Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#189)
#191Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#190)
#192Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#191)
#193Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#184)
#194Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#186)
#195Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#188)
#196Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#188)
#197Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tomas Vondra (#188)
#198Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#197)
#199Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#198)
#200Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#194)
#201Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#193)
#202Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#201)
#203Peter Eisentraut
peter_e@gmx.net
In reply to: Tomas Vondra (#181)
#204Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Peter Eisentraut (#203)
#205Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#204)
#206Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#205)
#207Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#196)
#208Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#206)
#209Shinoda, Noriyoshi (PN Japan FSIP)
In reply to: Tomas Vondra (#208)
#210Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Shinoda, Noriyoshi (PN Japan FSIP) (#209)
#211Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#210)
#212Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tomas Vondra (#206)
#213Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tom Lane (#212)
#214Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tomas Vondra (#213)
#215Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tom Lane (#214)
#216Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tomas Vondra (#215)
#217Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#191)
#218Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#217)
#219Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#218)
#220Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#219)
#221Amit Kapila
amit.kapila16@gmail.com
In reply to: Tomas Vondra (#220)
#222Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#221)
#223Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#190)
#224Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#223)
#225Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#224)
#226Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#225)
#227Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#226)
#228Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Amit Kapila (#227)
#229Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Amit Kapila (#227)
#230Amit Kapila
amit.kapila16@gmail.com
In reply to: Masahiko Sawada (#229)
#231Jonathan S. Katz
jkatz@postgresql.org
In reply to: Amit Kapila (#230)
#232Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Jonathan S. Katz (#231)
#233Jonathan S. Katz
jkatz@postgresql.org
In reply to: Tomas Vondra (#232)
#234Amit Kapila
amit.kapila16@gmail.com
In reply to: Jonathan S. Katz (#231)
#235Peter Smith
smithpb2250@gmail.com
In reply to: Alvaro Herrera (#80)
#236Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#235)
#237Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#236)
#238Peter Smith
smithpb2250@gmail.com
In reply to: Peter Smith (#237)
#239vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#238)
#240Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#239)
#241Erik Rijkers
er@xs4all.nl
In reply to: Peter Smith (#240)
#242vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#240)
#243Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#242)
#244Peter Smith
smithpb2250@gmail.com
In reply to: Erik Rijkers (#241)
#245vignesh C
vignesh21@gmail.com
In reply to: Peter Smith (#243)
#246Peter Smith
smithpb2250@gmail.com
In reply to: vignesh C (#245)
#247Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#246)
#248Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#247)
#249Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#248)
#250Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#249)
#251shiy.fnst@fujitsu.com
shiy.fnst@fujitsu.com
In reply to: Peter Smith (#250)
#252Peter Smith
smithpb2250@gmail.com
In reply to: shiy.fnst@fujitsu.com (#251)
#253Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#252)
#254Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#253)
#255Amit Kapila
amit.kapila16@gmail.com
In reply to: Peter Smith (#254)
#256Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#255)
#257Peter Smith
smithpb2250@gmail.com
In reply to: Amit Kapila (#256)