REINDEX CONCURRENTLY 2.0

Started by Michael Paquier · 170 messages · pgsql-hackers
#1 Michael Paquier
michael@paquier.xyz

Hi all,

Please find attached updated patches for the support of REINDEX
CONCURRENTLY, renamed 2.0 for the occasion:
- 20131114_1_index_drop_comments.patch, patch that updates some
comments in index_drop. This updates only a couple of comments in
index_drop but has not been committed yet. It should be IMO...
- 20131114_2_WaitForOldsnapshots_refactor.patch, a refactoring patch
providing a single API that can be used to wait for old snapshots
- 20131114_3_reindex_concurrently.patch, providing the core feature.
Patch 3 needs to have patch 2 applied first. Regression tests,
isolation tests and documentation are included with the patch.

This is the continuation of the previous thread that finished here:
/messages/by-id/CAB7nPqS+WYN021oQHd9GPe_5dSVcVXMvEBW_E2AV9OOEwggMHw@mail.gmail.com

This patch has been added for this commit fest.
Regards,
--
Michael

Attachments:

20131114_1_index_drop_comments.patch (text/x-patch, +9/-6)
20131114_2_WaitForOldsnapshots_refactor.patch (text/x-patch, +82/-70)
20131114_3_reindex_concurrently.patch (text/x-patch, +1512/-113)
#2 Peter Eisentraut
peter_e@gmx.net
In reply to: Michael Paquier (#1)
Re: REINDEX CONCURRENTLY 2.0

On 11/14/13, 9:40 PM, Michael Paquier wrote:

Hi all,

Please find attached updated patches for the support of REINDEX
CONCURRENTLY, renamed 2.0 for the occasion:
- 20131114_1_index_drop_comments.patch, patch that updates some
comments in index_drop. This updates only a couple of comments in
index_drop but has not been committed yet. It should be IMO...
- 20131114_2_WaitForOldsnapshots_refactor.patch, a refactoring patch
providing a single API that can be used to wait for old snapshots
- 20131114_3_reindex_concurrently.patch, providing the core feature.
Patch 3 needs to have patch 2 applied first. Regression tests,
isolation tests and documentation are included with the patch.

The third patch needs to be rebased, because of a conflict in index.c.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#1)
Re: REINDEX CONCURRENTLY 2.0

Hi,

On 2013-11-15 11:40:17 +0900, Michael Paquier wrote:

- 20131114_3_reindex_concurrently.patch, providing the core feature.
Patch 3 needs to have patch 2 applied first. Regression tests,
isolation tests and documentation are included with the patch.

Have you addressed my concurrency concerns from the last version?

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#4 Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#3)
Re: REINDEX CONCURRENTLY 2.0

On Sat, Nov 16, 2013 at 5:09 AM, Andres Freund <andres@2ndquadrant.com> wrote:

On 2013-11-15 11:40:17 +0900, Michael Paquier wrote:

- 20131114_3_reindex_concurrently.patch, providing the core feature.
Patch 3 needs to have patch 2 applied first. Regression tests,
isolation tests and documentation are included with the patch.

Have you addressed my concurrency concerns from the last version?

I have added documentation in the patch with a better explanation
about why those choices of implementation are made.
Thanks,
--
Michael

Attachments:

20131119_0_reindex_base.patch (text/x-patch, +82/-70)
20131119_1_reindex_conc.patch (text/x-patch, +1524/-113)
20131119_index_drop_comments.patch (text/x-patch, +9/-6)
#5 Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#4)
Re: REINDEX CONCURRENTLY 2.0

On 2013-11-18 19:52:29 +0900, Michael Paquier wrote:

On Sat, Nov 16, 2013 at 5:09 AM, Andres Freund <andres@2ndquadrant.com> wrote:

On 2013-11-15 11:40:17 +0900, Michael Paquier wrote:

- 20131114_3_reindex_concurrently.patch, providing the core feature.
Patch 3 needs to have patch 2 applied first. Regression tests,
isolation tests and documentation are included with the patch.

Have you addressed my concurrency concerns from the last version?

I have added documentation in the patch with a better explanation
about why those choices of implementation are made.

The dropping still isn't safe:
After phase 4 we are in the state:
old index: valid, live, !isdead
new index: !valid, live, !isdead
Then you do a index_concurrent_set_dead() from that state on in phase 5.
There you do WaitForLockers(locktag, AccessExclusiveLock) before
index_set_state_flags(INDEX_DROP_SET_DEAD).
That's not sufficient.

Consider what happens with the following sequence:
1) WaitForLockers(locktag, AccessExclusiveLock)
-> GetLockConflicts() => virtualxact 1
-> VirtualXactLock(1)
2) virtualxact 2 starts, opens the *old* index since it's currently the
only valid one.
3) virtualxact 1 finishes
4) index_concurrent_set_dead() does index_set_state_flags(DROP_SET_DEAD)
5) another transaction (vxid 3) starts inserting data into the relation, updates
only the new index, the old index is dead
6) vxid 2 inserts data, updates only the old index. Since it had the
index already open the cache invalidations won't be processed.

Now the indexes are out of sync. There's entries only in the old index
and there's entries only in the new index. Not good.
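The interleaving can be replayed as a toy model (illustrative Python, not PostgreSQL code; the row and index names are invented) to show how the two indexes end up with disjoint contents:

```python
# Toy replay of the unsafe ordering: WaitForLockers() first, then
# index_set_state_flags(DROP_SET_DEAD).  A backend that already has an
# index open keeps using it and does not reprocess invalidations.

def run_unsafe_set_dead():
    index_contents = {"old": set(), "new": set()}

    # Step 2: vxid 2 starts while WaitForLockers() is already running
    # (the wait only saw vxid 1) and opens the *old* index, the only
    # valid one at that point; it caches that choice.
    vxid2_open = ["old"]

    # Step 4: vxid 1 finishes, so the wait returns and the old index is
    # marked dead -- without ever waiting for vxid 2.
    dead = {"old"}

    # Step 5: vxid 3 starts afterwards and skips indexes marked dead.
    for name in ("old", "new"):
        if name not in dead:
            index_contents[name].add("row-from-vxid3")

    # Step 6: vxid 2 still holds the old index open, never sees the
    # invalidation, and inserts into it anyway.
    for name in vxid2_open:
        index_contents[name].add("row-from-vxid2")

    return index_contents
```

The result is exactly the divergence described: one row exists only in the old index and the other only in the new one.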

I hate to repeat myself, but you really need to follow the current
protocol for concurrently dropping indexes. Which involves *first*
marking the index as invalid so it won't be used for querying anymore,
then wait for everyone possibly still seeing that entry to finish, and
only *after* that mark the index as dead. You cannot argue away
correctness concerns with potential deadlocks.

c.f. /messages/by-id/20130926103400.GA2471420@alap2.anarazel.de
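Under the same toy assumptions, the protocol referenced above closes the window: mark the index invalid first, wait, and only then set it dead, so no backend that could still choose the old index survives past the wait (illustrative Python only; "valid" and "live" are deliberately collapsed into the single question of which index a newly started backend opens):

```python
def run_safe_set_dead():
    index_contents = {"old": set(), "new": set()}

    # First: mark the old index invalid, so backends starting from here
    # on will not choose it.
    invalid = {"old"}
    vxid2_open = [n for n in ("old", "new") if n not in invalid]

    # Then: WaitForLockers() -- any backend that opened the old index
    # before the invalidation has finished by the time we get here.

    # Only now: mark the old index dead.
    dead = {"old"}

    # Writers after the wait skip the dead index.
    for name in ("old", "new"):
        if name not in dead:
            index_contents[name].add("row-from-vxid3")
    for name in vxid2_open:
        index_contents[name].add("row-from-vxid2")
    return index_contents
```

With this ordering both writers hit the new index and the old one receives nothing, so nothing is lost when it is dropped.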

I am also still unconvinced that the logic in index_concurrent_swap() is
correct. It very much needs to explain why no backend can see values
that are inconsistent. E.g. what prevents a backend thinking the old and
new indexes have the same relfilenode? MVCC snapshots don't seem to
protect you against that.
I am not sure there's a problem, but there certainly needs to more
comments explaining why there are none.

Something like the following might be possible:

Backend 1: start reindex concurrently, till phase 4
Backend 2: ExecOpenIndices()
-> RelationGetIndexList (that list is consistent due to mvcc snapshots!)
Backend 2: -> index_open(old_index) (old relfilenode)
Backend 1: index_concurrent_swap()
-> CommitTransaction()
-> ProcArrayEndTransaction() (changes visible to others henceforth!)
Backend 2: -> index_open(new_index)
=> no cache invalidations yet, gets the old relfilenode
Backend 2: ExecInsertIndexTuples()
=> updates the same relation twice, corrupt
Backend 1: still in CommitTransaction()
-> AtEOXact_Inval() sends out invalidations
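This window too can be replayed as toy bookkeeping (illustrative Python, not real relcache code; "node-A"/"node-B" stand in for relfilenodes):

```python
def run_swap_race():
    # Toy replay: the swapped catalog rows become visible to others at
    # ProcArrayEndTransaction(), but the relcache invalidations only go
    # out later, in AtEOXact_Inval().
    catalog = {"idx_old": "node-A", "idx_new": "node-B"}
    relcache = {}  # backend 2's private cache: index name -> relfilenode

    def index_open(name):
        if name not in relcache:        # no invalidation received yet
            relcache[name] = catalog[name]
        return relcache[name]

    targets = []
    # Backend 2: opens the old index before the swap commits.
    targets.append(index_open("idx_old"))   # sees node-A

    # Backend 1: index_concurrent_swap() commits; the swapped catalog is
    # visible, but invalidations are still pending.
    catalog["idx_old"], catalog["idx_new"] = (
        catalog["idx_new"], catalog["idx_old"])

    # Backend 2: opens the new index, reading the *post-swap* catalog.
    targets.append(index_open("idx_new"))   # sees node-A again

    return targets
```

Both opens resolve to the same relfilenode, which is the "updates the same relation twice" corruption sketched above.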

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#6 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Michael Paquier (#1)
Re: REINDEX CONCURRENTLY 2.0

Michael Paquier wrote:

Hi all,

Please find attached updated patches for the support of REINDEX
CONCURRENTLY, renamed 2.0 for the occasion:
- 20131114_1_index_drop_comments.patch, patch that updates some
comments in index_drop. This updates only a couple of comments in
index_drop but has not been committed yet. It should be IMO...

Pushed this one, thanks.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#7 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Michael Paquier (#1)
Re: REINDEX CONCURRENTLY 2.0

Sorry for the lateness of this...

On 11/14/13, 8:40 PM, Michael Paquier wrote:

+	/*
+	 * Phase 4 of REINDEX CONCURRENTLY
+	 *
+	 * Now that the concurrent indexes have been validated could be used,
+	 * we need to swap each concurrent index with its corresponding old index.
+	 * Note that the concurrent index used for swaping is not marked as valid
+	 * because we need to keep the former index and the concurrent index with
+	 * a different valid status to avoid an implosion in the number of indexes
+	 * a parent relation could have if this operation fails multiple times in
+	 * a row due to a reason or another. Note that we already know thanks to
+	 * validation step that
+	 */
+

Was there supposed to be more to that comment?

In the loop right below it...

+	/* Swap the indexes and mark the indexes that have the old data as invalid */
+	forboth(lc, indexIds, lc2, concurrentIndexIds)
...
+		CacheInvalidateRelcacheByRelid(relOid);

Do we actually need to invalidate the cache on each case? Is it because we're grabbing a new transaction each time through?
--
Jim C. Nasby, Data Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net


#8 Michael Paquier
michael@paquier.xyz
In reply to: Jim Nasby (#7)
Re: REINDEX CONCURRENTLY 2.0

Hi,

Thanks for your comments.

On Fri, Jan 10, 2014 at 9:59 AM, Jim Nasby <jim@nasby.net> wrote:

Sorry for the lateness of this...

On 11/14/13, 8:40 PM, Michael Paquier wrote:

+       /*
+        * Phase 4 of REINDEX CONCURRENTLY
+        *
+        * Now that the concurrent indexes have been validated could be used,
+        * we need to swap each concurrent index with its corresponding old index.
+        * Note that the concurrent index used for swaping is not marked as valid
+        * because we need to keep the former index and the concurrent index with
+        * a different valid status to avoid an implosion in the number of indexes
+        * a parent relation could have if this operation fails multiple times in
+        * a row due to a reason or another. Note that we already know thanks to
+        * validation step that
+        */
+

Was there supposed to be more to that comment?

Not really, it seems that this chunk came out after writing multiple
successive versions of this patch.

In the loop right below it...

+       /* Swap the indexes and mark the indexes that have the old data as invalid */
+       forboth(lc, indexIds, lc2, concurrentIndexIds)
...
+               CacheInvalidateRelcacheByRelid(relOid);

Do we actually need to invalidate the cache on each case? Is it because
we're grabbing a new transaction each time through?

This is to force a refresh of the cached plans that had been using
the old index before the transaction of step 4 began.
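That refresh can be sketched as a toy plan-cache model (illustrative Python; CacheInvalidateRelcacheByRelid is the real function named above, but everything else here is invented bookkeeping):

```python
def demo_plan_cache_invalidation():
    # Cached plans stay pinned to whatever index they were planned
    # against; only an invalidation for the relation forces a replan.
    current_index = {"rel-1": "old-index"}
    plan_cache = {}

    def execute(relid):
        if relid not in plan_cache:          # (re)plan on cache miss
            plan_cache[relid] = current_index[relid]
        return plan_cache[relid]

    def invalidate(relid):                   # toy CacheInvalidateRelcacheByRelid
        plan_cache.pop(relid, None)

    first = execute("rel-1")                 # plans against the old index
    current_index["rel-1"] = "new-index"     # the phase-4 swap happens
    stale = execute("rel-1")                 # no invalidation: still old
    invalidate("rel-1")
    fresh = execute("rel-1")                 # replanned against the new index
    return first, stale, fresh
```

Without the invalidation the cached plan would keep pointing at the pre-swap index indefinitely.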

I have realigned this patch with latest head (d2458e3)... In case
someone is interested at some point...

Regards,
--
Michael

Attachments:

20140121_0_old_snapshots.patch (text/x-diff, +82/-70)
20140121_1_reindex_conc_core.patch (text/x-diff, +1529/-115)
#9 Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#8)
Re: REINDEX CONCURRENTLY 2.0

On Tue, Jan 21, 2014 at 10:12 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:

I have realigned this patch with latest head (d2458e3)... In case
someone is interested at some point...

Attached is a patch for REINDEX CONCURRENTLY rebased on HEAD
(d7938a4), as some people have shown interest in it in recent
discussions. The patch compiles and passes regression as well as
isolation tests.
--
Michael

Attachments:

20140827_Support-for-REINDEX-CONCURRENTLY.patch (text/x-patch, +1610/-185)
#10 Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#9)
Re: REINDEX CONCURRENTLY 2.0

On 2014-08-27 11:00:56 +0900, Michael Paquier wrote:

On Tue, Jan 21, 2014 at 10:12 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:

I have realigned this patch with latest head (d2458e3)... In case
someone is interested at some point...

Attached is a patch for REINDEX CONCURRENTLY rebased on HEAD
(d7938a4), as some people are showing interest in it by reading recent
discussions. Patch compiles and passes regression as well as isolation
tests..

Can you add it to the next CF? I'll try to look earlier, but can't
promise anything.

I very much would like this to get committed in some form or another.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#11 Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#10)
Re: REINDEX CONCURRENTLY 2.0

On Wed, Aug 27, 2014 at 3:41 PM, Andres Freund <andres@2ndquadrant.com> wrote:

Can you add it to the next CF? I'll try to look earlier, but can't
promise anything.

I very much would like this to get committed in some form or another.

Added it here to keep track of it:
https://commitfest.postgresql.org/action/patch_view?id=1563
Regards,
--
Michael


#12 Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#11)
Re: REINDEX CONCURRENTLY 2.0

On Wed, Aug 27, 2014 at 3:53 PM, Michael Paquier <michael.paquier@gmail.com>
wrote:

On Wed, Aug 27, 2014 at 3:41 PM, Andres Freund <andres@2ndquadrant.com>
wrote:

Can you add it to the next CF? I'll try to look earlier, but can't
promise anything.

I very much would like this to get committed in some form or another.

Added it here to keep track of it:
https://commitfest.postgresql.org/action/patch_view?id=1563

Attached is a fairly refreshed patch that should be used as a base for the
next commit fest. The following changes should be noted:
- Use of AccessExclusiveLock when swapping relfilenodes of an index and its
concurrent entry instead of ShareUpdateExclusiveLock for safety. At the
limit of my understanding, that's the consensus reached until now.
- Cleanup of many comments and a refresh of the documentation that was
somewhat wrongly formulated or shaped in some places
- Addition of support for autocommit off in psql for REINDEX [ TABLE |
INDEX ] CONCURRENTLY
- Some more code cleanup.
I haven't been through the tab completion support for psql but looking at
tab-completion.c this seems a bit tricky with the stuff related to CREATE
INDEX CONCURRENTLY already present. Nothing huge though.
Regards,
--
Michael

Attachments:

20141001_reindex_concurrently.patch (text/x-diff, +1615/-190)
#13 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Michael Paquier (#12)
Re: REINDEX CONCURRENTLY 2.0

On 10/1/14, 2:00 AM, Michael Paquier wrote:

On Wed, Aug 27, 2014 at 3:53 PM, Michael Paquier <michael.paquier@gmail.com <mailto:michael.paquier@gmail.com>> wrote:

On Wed, Aug 27, 2014 at 3:41 PM, Andres Freund <andres@2ndquadrant.com <mailto:andres@2ndquadrant.com>> wrote:

Can you add it to the next CF? I'll try to look earlier, but can't
promise anything.

I very much would like this to get committed in some form or another.

Added it here to keep track of it:
https://commitfest.postgresql.org/action/patch_view?id=1563

Attached is a fairly-refreshed patch that should be used as a base for the next commit fest. The following changes should be noticed:
- Use of AccessExclusiveLock when swapping relfilenodes of an index and its concurrent entry instead of ShareUpdateExclusiveLock for safety. At the limit of my understanding, that's the consensus reached until now.
- Cleanup of many comments and refresh of the documentation that was somewhat wrongly-formulated or shaped at some places
- Addition of support for autocommit off in psql for REINDEX [ TABLE | INDEX ] CONCURRENTLY
- Some more code cleanup..
I haven't been through the tab completion support for psql but looking at tab-completion.c this seems a bit tricky with the stuff related to CREATE INDEX CONCURRENTLY already present. Nothing huge though.

Patch applies against current HEAD and builds, but I'm getting 37 failed tests (mostly parallel, but also misc and WITH; results attached). Is that expected?

+   <para>
+    In a concurrent index build, a new index whose storage will replace the one
+    to be rebuild is actually entered into the system catalogs in one
+    transaction, then two table scans occur in two more transactions. Once this
+    is performed, the old and fresh indexes are swapped by taking a lock
+    <literal>ACCESS EXCLUSIVE</>. Finally two additional transactions
+    are used to mark the concurrent index as not ready and then drop it.
+   </para>

The "mark the concurrent index" bit is rather confusing; it sounds like it's referring to the new index instead of the old. Now that I've read the code I understand what's going on here between the concurrent index *entry* and the filenode swap, but I don't think the docs make this sufficiently clear to users.

How about something like this:

The following steps occur in a concurrent index build, each in a separate transaction. Note that if there are multiple indexes to be rebuilt then each step loops through all the indexes we're rebuilding, using a separate transaction for each one.

1. A new "temporary" index definition is entered into the catalog. This definition is only used to build the new index, and will be removed at the completion of the process.
2. A first pass index build is done.
3. A second pass is performed to add tuples that were added while the first pass build was running.
4. pg_class.relfilenode for the existing index definition and the "temporary" definition are swapped. This means that the existing index definition now uses the index data that we stored during the build, and the "temporary" definition is using the old index data.
5. The "temporary" index definition is marked as dead.
6. The "temporary" index definition and its data (which is now the data for the old index) are dropped.
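The six steps can be sketched as toy bookkeeping (illustrative Python, not PostgreSQL code; only the relfilenode swap and state flags are modeled):

```python
def concurrent_reindex():
    existing = {"relfilenode": "old-data", "state": "valid"}

    # 1. Enter the "temporary" definition into the catalog.
    temp = {"relfilenode": None, "state": "not-valid"}

    # 2./3. First-pass build, then a catch-up pass and validation.
    temp["relfilenode"] = "new-data"

    # 4. Swap relfilenodes: the existing definition now owns the data
    #    built in steps 2-3, the temporary one owns the old data.
    existing["relfilenode"], temp["relfilenode"] = (
        temp["relfilenode"], existing["relfilenode"])

    # 5. Mark the temporary definition dead.
    temp["state"] = "dead"

    # 6. Drop the temporary definition together with the old data.
    dropped, temp = temp, None
    return existing, dropped
```

The point of the swap is visible in the return values: the surviving definition ends up holding the freshly built data, and what gets dropped is the old storage under the throwaway definition.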

+ * index_concurrent_create
+ *
+ * Create an index based on the given one that will be used for concurrent
+ * operations. The index is inserted into catalogs and needs to be built later
+ * on. This is called during concurrent index processing. The heap relation
+ * on which is based the index needs to be closed by the caller.

Last bit presumably should be "on which the index is based".

+ /* Build the list of column names, necessary for index_create */
Instead of all this work wouldn't it be easier to create a version of index_create/ConstructTupleDescriptor that will use the IndexInfo for the old index? ISTM index_concurrent_create() is doing a heck of a lot of work to marshal data into one form just to have it get marshaled yet again. Worst case, if we do have to play this game, there should be a stand-alone function to get the columns/expressions for an existing index; you're duplicating a lot of code from pg_get_indexdef_worker().

index_concurrent_swap(): Perhaps it'd be better to create index_concurrent_swap_setup() and index_concurrent_swap_cleanup() and refactor the duplicated code out... the actual function would then become:

ReindexRelationConcurrently()

+ * Process REINDEX CONCURRENTLY for given relation Oid. The relation can be
+ * either an index or a table. If a table is specified, each step of REINDEX
+ * CONCURRENTLY is done in parallel with all the table's indexes as well as
+ * its dependent toast indexes.
This comment is a bit misleading; we're not actually doing anything in parallel, right? AFAICT index_concurrent_build is going to block while each index is built the first time.

+	 * relkind is an index, this index itself will be rebuilt. The locks taken
+	 * parent relations and involved indexes are kept until this transaction
+	 * is committed to protect against schema changes that might occur until
+	 * the session lock is taken on each relation.

This comment is a bit unclear to me... at minimum I think it should be "* on parent relations" instead of "* parent relations", but I think it needs to elaborate on why/when we're also taking session level locks.

I also wordsmithed this comment a bit...
* Here begins the process for concurrently rebuilding the index entries.
* We need first to create an index which is based on the same data
* as the former index except that it will be only registered in catalogs
* and will be built later. It is possible to perform all the operations
* on all the indexes at the same time for a parent relation including
* indexes for its toast relation.

and this one...
* During this phase the concurrent indexes catch up with any new tuples that
* were created during the previous phase.
*
* We once again wait until no transaction can have the table open with
* the index marked as read-only for updates. Each index validation is done
* in a separate transaction to minimize how long we hold an open transaction.

+	 * a different valid status to avoid an implosion in the number of indexes
+	 * a parent relation could have if this operation step fails multiple times
+	 * in a row due to a reason or another.

I'd change that to "explosion in the number of indexes a parent relation could have if this operation fails."

Phase 4, 5 and 6 are rather confusing if you don't understand that each "concurrent index" entry is meant to be thrown away. I think the Phase 4 comment should elaborate on that.

The comment in check_exclusion_constraint() is good; shouldn't the related comment on this line in index_create() mention that check_exclusion_constraint() needs to be changed if we ever support concurrent builds of exclusion indexes?
if (concurrent && is_exclusion && !is_reindex)

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

Attachments:

regression.diffs (text/plain, +22016/-22027)
regression.out (text/plain)
#14 Michael Paquier
michael@paquier.xyz
In reply to: Jim Nasby (#13)
Re: REINDEX CONCURRENTLY 2.0

Thanks for your input, Jim!

On Wed, Oct 29, 2014 at 7:59 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:

Patch applies against current HEAD and builds, but I'm getting 37 failed
tests (mostly parallel, but also misc and WITH; results attached). Is that
expected?

This is caused by the recent commit 7b1c2a0 (that I actually participated
in reviewing :p) because of a missing inclusion of ruleutils.h in index.c.

The "mark the concurrent index" bit is rather confusing; it sounds like it's
referring to the new index instead of the old. Now that I've read the code I
understand what's going on here between the concurrent index *entry* and the
filenode swap, but I don't think the docs make this sufficiently clear to
users.

How about something like this:

The following steps occur in a concurrent index build, each in a separate
transaction. Note that if there are multiple indexes to be rebuilt then each
step loops through all the indexes we're rebuilding, using a separate
transaction for each one.

1. [blah]

Definitely a good idea! I took your text and made it more precise, listing
the actions done for each step, the pg_index flags switched, using
<orderedlist> to make the list of steps described in a separate paragraph
more exhaustive for the user. At the same time I reworked the docs removing
a part that was somewhat duplicated about dealing with the constraints
having invalid index entries and how to drop them.

+ * index_concurrent_create
+ *
+ * Create an index based on the given one that will be used for concurrent
+ * operations. The index is inserted into catalogs and needs to be built later
+ * on. This is called during concurrent index processing. The heap relation
+ * on which is based the index needs to be closed by the caller.

Last bit presumably should be "on which the index is based".

What about "Create a concurrent index based on the definition of the one
provided by caller"?

+ /* Build the list of column names, necessary for index_create */
Instead of all this work wouldn't it be easier to create a version of
index_create/ConstructTupleDescriptor that will use the IndexInfo for the
old index? ISTM index_concurrent_create() is doing a heck of a lot of work
to marshal data into one form just to have it get marshaled yet again. Worst
case, if we do have to play this game, there should be a stand-alone
function to get the columns/expressions for an existing index; you're
duplicating a lot of code from pg_get_indexdef_worker().

Yes, this definitely sucks and the approach creating a function to get all
the column names is not productive as well. Then let's define an additional
argument in index_create to pass a potential TupleDesc instead of this
whole wart. I noticed as well that we need to actually reset attcacheoff to
be able to use a fresh version of the tuple descriptor of the old index. I
added a small API for this purpose in tupdesc.h called ResetTupleDescCache.
Would it make sense instead to extend CreateTupleDescCopyConstr or
CreateTupleDescCopy with a boolean flag?
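A Python stand-in for the proposed ResetTupleDescCache (the C routine is only a proposal in this mail, not an existing PostgreSQL API; the dict layout is invented, and -1 is pg_attribute's "offset not cached" sentinel):

```python
def reset_tupledesc_cache(attrs):
    # Clear each attribute's cached offset on the copied descriptor so
    # that offsets are recomputed for the new index, as sketched above.
    for att in attrs:
        att["attcacheoff"] = -1
    return attrs
```

Folding this into CreateTupleDescCopy/CreateTupleDescCopyConstr via a boolean flag, as asked above, would do the same work at copy time instead of as a separate call.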

index_concurrent_swap(): Perhaps it'd be better to create
index_concurrent_swap_setup() and index_concurrent_swap_cleanup() and
refactor the duplicated code out... the actual function would then become:

This sentence is not finished :) IMO, index_concurrent_swap looks good as
is, taking as arguments the index and its concurrent entry, and swapping
their relfilenodes after taking AccessExclusiveLock, which will be held until
the end of this transaction.

ReindexRelationConcurrently()

+ * Process REINDEX CONCURRENTLY for given relation Oid. The relation can be
+ * either an index or a table. If a table is specified, each step of REINDEX
+ * CONCURRENTLY is done in parallel with all the table's indexes as well as
+ * its dependent toast indexes.

This comment is a bit misleading; we're not actually doing anything in
parallel, right? AFAICT index_concurrent_build is going to block while each
index is built the first time.

Yes, parallel may be misleading. What is meant here is that each step of
the process is done one by one on all the valid indexes a table may have.

+        * relkind is an index, this index itself will be rebuilt. The locks taken
+        * parent relations and involved indexes are kept until this transaction
+        * is committed to protect against schema changes that might occur until
+        * the session lock is taken on each relation.

This comment is a bit unclear to me... at minimum I think it should be "* on
parent relations" instead of "* parent relations", but I think it needs to
elaborate on why/when we're also taking session level locks.

Hum, done as follows:

@@ -896,9 +896,11 @@ ReindexRelationConcurrently(Oid relationOid)
         * If the relkind of given relation Oid is a table, all its valid indexes
         * will be rebuilt, including its associated toast table indexes. If
         * relkind is an index, this index itself will be rebuilt. The locks taken
-        * parent relations and involved indexes are kept until this transaction
+        * on parent relations and involved indexes are kept until this transaction
         * is committed to protect against schema changes that might occur until
-        * the session lock is taken on each relation.
+        * the session lock is taken on each relation, session lock used to
+        * similarly protect from any schema change that could happen within the
+        * multiple transactions that are used during this process.

I also wordsmithed this comment a bit...
* Here begins the process for concurrently rebuilding the index
and this one...
* During this phase the concurrent indexes catch up with any new

Slight differences indeed. Thanks and included.

I'd change that to "explosion in the number of indexes a parent relation
could have if this operation fails."

Well, implosion was more... I don't recall my state of mind when writing
that. So changed the way you recommend.

Phase 4, 5 and 6 are rather confusing if you don't understand that each
"concurrent index" entry is meant to be thrown away. I think the Phase 4
comment should elaborate on that.

OK, done.

The comment in check_exclusion_constraint() is good; shouldn't the related
comment on this line in index_create() mention that
check_exclusion_constraint() needs to be changed if we ever support
concurrent builds of exclusion indexes?

if (concurrent && is_exclusion && !is_reindex)

OK, what about that then:

        /*
-        * This case is currently not supported, but there's no way to ask for it
-        * in the grammar anyway, so it can't happen.
+        * This case is currently only supported during a concurrent index
+        * rebuild, but there is no way to ask for it in the grammar otherwise
+        * anyway. If support for exclusion constraints is added in the future,
+        * the check similar to this one in check_exclusion_constraint should as
+        * well be changed accordingly.

Updated patch is attached.
Thanks again.
Regards,
--
Michael

Attachments:

20141030_reindex_concurrently_3_v2.patch (text/x-diff, +1648/-197)
#15 Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Michael Paquier (#14)
Re: REINDEX CONCURRENTLY 2.0

On 10/30/14, 3:19 AM, Michael Paquier wrote:

Thanks for your input, Jim!

On Wed, Oct 29, 2014 at 7:59 AM, Jim Nasby <Jim.Nasby@bluetreble.com <mailto:Jim.Nasby@bluetreble.com>> wrote:

Patch applies against current HEAD and builds, but I'm getting 37 failed
tests (mostly parallel, but also misc and WITH; results attached). Is that
expected?

This is caused by the recent commit 7b1c2a0 (that I actually participated in reviewing :p) because of a missing inclusion of ruleutils.h in index.c.

The "mark the concurrent index" bit is rather confusing; it sounds like it's
referring to the new index instead of the old. Now that I've read the code I
understand what's going on here between the concurrent index *entry* and the
filenode swap, but I don't think the docs make this sufficiently clear to
users.

How about something like this:

The following steps occur in a concurrent index build, each in a separate
transaction. Note that if there are multiple indexes to be rebuilt then each
step loops through all the indexes we're rebuilding, using a separate
transaction for each one.

1. [blah]

Definitely a good idea! I took your text and made it more precise, listing the actions done for each step, the pg_index flags switched, using <orderedlist> to make the list of steps described in a separate paragraph more exhaustive for the user. At the same time I reworked the docs removing a part that was somewhat duplicated about dealing with the constraints having invalid index entries and how to drop them.

Awesome! Just a few items here:

+       Then a second pass is performed to add tuples that were added while
+       the first pass build was running. One the validation of the index

s/One the/Once the/

+ * index_concurrent_create
+ *
+ * Create an index based on the given one that will be used for concurrent
+ * operations. The index is inserted into catalogs and needs to be built
later
+ * on. This is called during concurrent index processing. The heap relation
+ * on which is based the index needs to be closed by the caller.

Last bit presumably should be "on which the index is based".

What about "Create a concurrent index based on the definition of the one provided by caller"?

That's good too, but my comment was on the last sentence, not the first.

+ /* Build the list of column names, necessary for index_create */
Instead of all this work wouldn't it be easier to create a version of
index_create/ConstructTupleDescriptor that will use the IndexInfo for the
old index? ISTM index_concurrent_create() is doing a heck of a lot of work
to marshal data into one form just to have it get marshaled yet again. Worst
case, if we do have to play this game, there should be a stand-alone
function to get the columns/expressions for an existing index; you're
duplicating a lot of code from pg_get_indexdef_worker().

Yes, this definitely sucks, and the approach of creating a function to get all the column names is not productive either. Let's instead define an additional argument in index_create to pass an optional TupleDesc rather than this whole wart. I also noticed that we need to reset attcacheoff to be able to use a fresh version of the old index's tuple descriptor. I added a small API for this purpose in tupdesc.h called ResetTupleDescCache. Would it make sense instead to extend CreateTupleDescCopyConstr or CreateTupleDescCopy with a boolean flag?

Perhaps there'd be other places that would want to reset the stats, so I lean slightly that direction.

The comment at the beginning of index_create needs to be modified to mention tupdesc, and especially that setting tupdesc overrides indexColNames.
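[Editor's note: the attcacheoff reset being discussed can be sketched with a toy descriptor copy. The struct layout and function name here are illustrative only; the real TupleDesc lives in src/include/access/tupdesc.h, but the idea is the same: attcacheoff caches a computed attribute offset, and a copied descriptor must not inherit stale values, so each attribute's cache is reset to -1, meaning "not computed yet".]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for Form_pg_attribute / TupleDesc */
typedef struct { int attnum; int attcacheoff; } Attr;
typedef struct { int natts; Attr *attrs; } TupDesc;

/* Copy a descriptor and invalidate every cached offset, so the copy
 * recomputes offsets for its own use instead of reusing stale ones. */
TupDesc *copy_tupdesc_reset_cache(const TupDesc *src)
{
    TupDesc *dst = malloc(sizeof(TupDesc));
    dst->natts = src->natts;
    dst->attrs = malloc(sizeof(Attr) * (size_t) src->natts);
    memcpy(dst->attrs, src->attrs, sizeof(Attr) * (size_t) src->natts);
    for (int i = 0; i < dst->natts; i++)
        dst->attrs[i].attcacheoff = -1;   /* -1 = offset not computed */
    return dst;
}
```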

index_concurrent_swap(): Perhaps it'd be better to create
index_concurrent_swap_setup() and index_concurrent_swap_cleanup() and
refactor the duplicated code out... the actual function would then become:

This sentence is not finished :) IMO, index_concurrent_swap looks good as is, taking as arguments the index and its concurrent entry, and swapping their relfilenodes after taking an AccessExclusiveLock that will be held until the end of this transaction.

Fair enough.
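[Editor's note: for reference, the swap step boils down to exchanging only the on-disk file identifiers (relfilenodes) of the index and its concurrent twin, so each pg_class entry keeps its own OID, and with it every dependency pointing at it. A minimal sketch with illustrative field names; in the patch this happens under an AccessExclusiveLock held to end of transaction.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified pg_class view of an index relation */
typedef struct { uint32_t oid; uint32_t relfilenode; } IndexRel;

/* Exchange only the storage identifiers; OIDs (and hence all
 * catalog dependencies) stay with their original entries. */
void swap_relfilenodes(IndexRel *old_index, IndexRel *new_index)
{
    uint32_t tmp = old_index->relfilenode;
    old_index->relfilenode = new_index->relfilenode;
    new_index->relfilenode = tmp;
}
```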

ReindexRelationConcurrently()

+ * Process REINDEX CONCURRENTLY for given relation Oid. The relation can be
+ * either an index or a table. If a table is specified, each step of
REINDEX
+ * CONCURRENTLY is done in parallel with all the table's indexes as well as
+ * its dependent toast indexes.
This comment is a bit misleading; we're not actually doing anything in
parallel, right? AFAICT index_concurrent_build is going to block while each
index is built the first time.

Yes, "parallel" may be misleading. What is meant here is that each step of the process is applied one by one to all the valid indexes a table may have.

New comment looks good.

+        * relkind is an index, this index itself will be rebuilt. The locks
taken
+        * parent relations and involved indexes are kept until this
transaction
+        * is committed to protect against schema changes that might occur
until
+        * the session lock is taken on each relation.

This comment is a bit unclear to me... at minimum I think it should be "* on
parent relations" instead of "* parent relations", but I think it needs to
elaborate on why/when we're also taking session level locks.

Hum, done as follows:
@@ -896,9 +896,11 @@ ReindexRelationConcurrently(Oid relationOid)
* If the relkind of given relation Oid is a table, all its valid indexes
* will be rebuilt, including its associated toast table indexes. If
* relkind is an index, this index itself will be rebuilt. The locks taken
-        * parent relations and involved indexes are kept until this transaction
+        * on parent relations and involved indexes are kept until this transaction
* is committed to protect against schema changes that might occur until
-        * the session lock is taken on each relation.
+        * the session lock is taken on each relation, session lock used to
+        * similarly protect from any schema change that could happen within the
+        * multiple transactions that are used during this process.
*/

Cool.

OK, what about that then:
/*
-        * This case is currently not supported, but there's no way to ask for it
-        * in the grammar anyway, so it can't happen.
+        * This case is currently only supported during a concurrent index
+        * rebuild, but there is no way to ask for it in the grammar otherwise
+        * anyway. If support for exclusion constraints is added in the future,
+        * the check similar to this one in check_exclusion_constraint should as
+        * well be changed accordingly.

Updated patch is attached.

Works for me.

Keep in mind I'm not super familiar with the guts of index creation, so it'd be good for someone else to look at that bit (especially index_concurrent_create and ReindexRelationConcurrently).
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
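[Editor's note: the step sequence discussed above, where each step runs in its own transaction and loops over every index being rebuilt while flipping the pg_index-style flags (indisready/indisvalid), can be sketched as a toy state machine. All names and the exact flag timing here are illustrative, not the patch's actual code.]

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { bool ready; bool valid; } IndexFlags;
typedef struct { IndexFlags old_entry; IndexFlags new_entry; } RebuildTarget;

enum step { STEP_CREATE, STEP_BUILD, STEP_VALIDATE, STEP_SWAP,
            STEP_MARK_OLD, STEP_DROP_OLD, NUM_STEPS };

/* Step-major processing: finish a step for every index before moving
 * on, using one transaction per index per step. Returns the number of
 * transactions this toy model used. */
int reindex_table_concurrently(RebuildTarget *targets, int nindexes)
{
    int transactions = 0;
    for (int s = 0; s < NUM_STEPS; s++) {
        for (int i = 0; i < nindexes; i++) {
            RebuildTarget *t = &targets[i];
            transactions++;                   /* separate transaction each time */
            switch ((enum step) s) {
                case STEP_CREATE:             /* catalog entry only, unusable yet */
                    t->new_entry.ready = false;
                    t->new_entry.valid = false;
                    break;
                case STEP_BUILD:              /* first pass done; index catches new tuples */
                    t->new_entry.ready = true;
                    break;
                case STEP_VALIDATE:           /* second pass adds tuples missed earlier */
                    t->new_entry.valid = true;
                    break;
                case STEP_SWAP:               /* relfilenodes exchanged; flags untouched */
                    break;
                case STEP_MARK_OLD:           /* old entry taken out of service */
                    t->old_entry.valid = false;
                    t->old_entry.ready = false;
                    break;
                case STEP_DROP_OLD:           /* old entry dropped */
                case NUM_STEPS:
                    break;
            }
        }
    }
    return transactions;
}
```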

#16Jim Nasby
Jim.Nasby@BlueTreble.com
In reply to: Michael Paquier (#14)
Re: REINDEX CONCURRENTLY 2.0

On 10/30/14, 3:19 AM, Michael Paquier wrote:

On Wed, Oct 29, 2014 at 7:59 AM, Jim Nasby <Jim.Nasby@bluetreble.com <mailto:Jim.Nasby@bluetreble.com>> wrote:

Patch applies against current HEAD and builds, but I'm getting 37 failed
tests (mostly parallel, but also misc and WITH; results attached). Is that
expected?

This is caused by the recent commit 7b1c2a0 (that I actually participated in reviewing :p) because of a missing inclusion of ruleutils.h in index.c.

Sorry, forgot to mention patch now passes make check cleanly.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


#17Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#14)
Re: REINDEX CONCURRENTLY 2.0

On Thu, Oct 30, 2014 at 5:19 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:

Updated patch is attached.

Please find attached an updated patch with the following things changed:
- Addition of tab completion in psql for all new commands
- Addition of a call to WaitForLockers in index_concurrent_swap to
ensure that there are no transactions still running on the parent table
before exclusive locks are taken on the index and its concurrent entry.
Previous patch versions created deadlocks because of that, an issue
spotted by the isolation tests integrated in the patch.
- Isolation tests for reindex concurrently are re-enabled by default.
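
[Editor's note: the deadlock fix in the second bullet can be modeled as follows. This is a minimal sketch with hypothetical names; the real WaitForLockers inspects the lock manager's state and blocks on each holder (via the transaction-lock machinery) rather than flipping flags. The point is that it drains a snapshot of the transactions that held conflicting locks when we looked, before the exclusive lock is requested; later arrivals simply queue behind our lock.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a transaction holding a conflicting lock */
typedef struct { int xid; bool finished; } Locker;

/* Wait for every still-running member of a previously captured set of
 * lock holders; record which xids we actually waited on. Returns the
 * number of waits. Once this returns, it is safe (deadlock-free in
 * this model) to request the exclusive lock. */
int wait_for_lockers(Locker *holders, int n, int *waited_on)
{
    int waits = 0;
    for (int i = 0; i < n; i++) {
        if (!holders[i].finished) {
            holders[i].finished = true;   /* stand-in for blocking until commit/abort */
            waited_on[waits++] = holders[i].xid;
        }
    }
    return waits;
}
```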
Regards,
--
Michael

Attachments:

20141106_reindex_concurrently_3_v3.patch (text/x-diff) +1703-202
#18Peter Eisentraut
peter_e@gmx.net
In reply to: Michael Paquier (#12)
Re: REINDEX CONCURRENTLY 2.0

On 10/1/14 3:00 AM, Michael Paquier wrote:

- Use of AccessExclusiveLock when swapping relfilenodes of an index and
its concurrent entry instead of ShareUpdateExclusiveLock for safety. As
far as I understand, that's the consensus reached so far.

I'm very curious about this point. I looked through all the previous
discussions, and the only time I saw this mentioned was at the very
beginning when it was said that we could review the patch while ignoring
this issue and fix it later with MVCC catalog access. Then it got very
technical, but it was never explicitly concluded whether it was possible
to fix this or not.

Also, in the thread "Concurrently option for reindexdb" you pointed out
that requiring an exclusive lock isn't really concurrent and proposed an
option like --minimum-locks.

I will point out again that we specifically invented DROP INDEX
CONCURRENTLY because holding an exclusive lock even briefly isn't good
enough.

If REINDEX cannot work without an exclusive lock, we should invent some
other qualifier, like WITH FEWER LOCKS. It's still useful, but we
shouldn't give people the idea that they can throw away their custom
CREATE INDEX CONCURRENTLY + DROP INDEX CONCURRENTLY scripts.


#19Jeff Janes
jeff.janes@gmail.com
In reply to: Michael Paquier (#17)
Re: REINDEX CONCURRENTLY 2.0

On Wed, Nov 5, 2014 at 8:49 PM, Michael Paquier <michael.paquier@gmail.com>
wrote:

On Thu, Oct 30, 2014 at 5:19 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:

Updated patch is attached.

Please find attached an updated patch with the following things changed:
- Addition of tab completion in psql for all new commands
- Addition of a call to WaitForLockers in index_concurrent_swap to
ensure that there are no transactions still running on the parent table
before exclusive locks are taken on the index and its concurrent entry.
Previous patch versions created deadlocks because of that, an issue
spotted by the isolation tests integrated in the patch.
- Isolation tests for reindex concurrently are re-enabled by default.
Regards,

It looks like this needs another rebase; I get failures
in index.c, toasting.c, indexcmds.c, and index.h.

Thanks,

Jeff

#20Michael Paquier
michael@paquier.xyz
In reply to: Jeff Janes (#19)
Re: REINDEX CONCURRENTLY 2.0

On Tue, Nov 11, 2014 at 3:24 AM, Jeff Janes <jeff.janes@gmail.com> wrote:

On Wed, Nov 5, 2014 at 8:49 PM, Michael Paquier <michael.paquier@gmail.com> wrote:

On Thu, Oct 30, 2014 at 5:19 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:

Updated patch is attached.

Please find attached an updated patch with the following things changed:
- Addition of tab completion in psql for all new commands
- Addition of a call to WaitForLockers in index_concurrent_swap to
ensure that there are no transactions still running on the parent table
before exclusive locks are taken on the index and its concurrent entry.
Previous patch versions created deadlocks because of that, an issue
spotted by the isolation tests integrated in the patch.
- Isolation tests for reindex concurrently are re-enabled by default.
Regards,

It looks like this needs another rebase; I get failures in index.c, toasting.c, indexcmds.c, and index.h.

Indeed. There are some conflicts created by the recent modification of
index_create. Here is a rebased patch.
--
Michael

Attachments:

20141110_reindex_concurrently_3_v4.patch (application/x-patch) +1704-202