Statistics Import and Export

Started by Corey Huinker over 2 years ago · 555 messages
#1 Corey Huinker
corey.huinker@gmail.com

pg_stats_export is a view that aggregates pg_statistic data by relation
oid and stores all of the column statistical data in a system-independent
(i.e. no oids, collation information removed, all MCV values rendered as
text) jsonb format, along with the relation's relname, reltuples, and
relpages from pg_class, as well as the schemaname from pg_namespace.
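A minimal query sketch against the view as described (untested; the name
of the jsonb column is an assumption, the others come from the
description above):

```sql
-- Hypothetical query against the export view described above; the
-- "stats" column name is an assumption, not taken from the patch.
SELECT schemaname, relname, reltuples, relpages, stats
FROM pg_stats_export
WHERE schemaname = 'public'
  AND relname = 'my_table';
```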

pg_import_rel_stats is a function which takes a relation oid,
server_version_num, num_tuples, num_pages, and a column_stats jsonb in
a format matching that of pg_stats_export, and applies that data to
the specified pg_class and pg_statistics rows for the relation
specified.

The most common use-case for such a function is in upgrades and
dump/restore, wherein the upgrade process would capture the output of
pg_stats_export into a regular table, perform the upgrade, and then
join that data to the existing pg_class rows, updating statistics to be
a close approximation of what they were just prior to the upgrade. The
hope is that these statistics are better than the early stages of
--analyze-in-stages and can be applied faster, thus reducing system
downtime.
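That workflow might look roughly like this (untested sketch; the column
names on the export view, in particular stats and server_version_num,
are assumptions):

```sql
-- Before the upgrade: capture the export view into a regular table.
CREATE TABLE stats_snapshot AS SELECT * FROM pg_stats_export;

-- After the upgrade: join the snapshot to the new cluster's pg_class
-- rows and re-apply the saved statistics.
SELECT pg_import_rel_stats(c.oid, s.server_version_num,
                           s.reltuples, s.relpages, s.stats)
FROM stats_snapshot s
JOIN pg_namespace n ON n.nspname = s.schemaname
JOIN pg_class c ON c.relnamespace = n.oid AND c.relname = s.relname;
```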

The values applied to pg_class are done inline, which is to say
non-transactionally. The values applied to pg_statistic are applied
transactionally, as if an ANALYZE operation were reading from a
cheat-sheet.

This function and view will need to be followed up with corresponding
ones for pg_statistic_ext and pg_statistic_ext_data, and while we would
likely never backport the import functions, we can have user programs
do the same work as the export views so that statistics can be brought
forward from versions as far back as there is jsonb to store them.

While the primary purpose of the import function(s) is to reduce downtime
during an upgrade, it is not hard to see that they could also be used to
facilitate tuning and development operations, asking questions like "how
might this query plan change if this table had 1000x rows in it?", without
actually putting those rows into the table.
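For example, a what-if experiment might re-import a table's own stats
with inflated counts (untested sketch; the view's column names are
assumptions):

```sql
-- Re-apply my_table's current stats, but claim 1000x the rows/pages,
-- without inserting a single row.
SELECT pg_import_rel_stats('my_table'::regclass,
                           current_setting('server_version_num')::int,
                           e.reltuples * 1000,  -- inflated tuple count
                           e.relpages * 1000,   -- inflated page count
                           e.stats)
FROM pg_stats_export e
WHERE e.relname = 'my_table';

EXPLAIN SELECT * FROM my_table WHERE a = 100;
```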

Attachments:

v1-0001-Introduce-the-system-view-pg_stats_export-and-the.patch (text/x-patch; charset=US-ASCII) +954/-3
#2 Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Corey Huinker (#1)
Re: Statistics Import and Export

On Thu, Aug 31, 2023 at 12:17 PM Corey Huinker <corey.huinker@gmail.com> wrote:

While the primary purpose of the import function(s) is to reduce downtime
during an upgrade, it is not hard to see that they could also be used to
facilitate tuning and development operations, asking questions like "how might
this query plan change if this table had 1000x rows in it?", without actually
putting those rows into the table.

Thanks. I think this may be used with postgres_fdw to import
statistics directly from the foreign server, whenever possible,
rather than fetching the rows and building them locally. If it's known
that the stats on the foreign and local servers match for a foreign table,
we will be one step closer to accurately estimating the cost of a
foreign plan locally rather than through EXPLAIN.

--
Best Wishes,
Ashutosh Bapat

#3 Corey Huinker
corey.huinker@gmail.com
In reply to: Ashutosh Bapat (#2)
Re: Statistics Import and Export

Thanks. I think this may be used with postgres_fdw to import
statistics directly from the foreign server, whenever possible,
rather than fetching the rows and building them locally. If it's known
that the stats on the foreign and local servers match for a foreign table,
we will be one step closer to accurately estimating the cost of a
foreign plan locally rather than through EXPLAIN.

Yeah, that use makes sense as well, and if so then postgres_fdw would
likely need to be aware of the appropriate query for several versions back
- they change, not by much, but they do change. So now we'd have each query
text in three places: a system view, postgres_fdw, and the bin/scripts
pre-upgrade program. So I probably should consider the best way to share
those in the codebase.

#4 Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#3)
Re: Statistics Import and Export

Yeah, that use makes sense as well, and if so then postgres_fdw would
likely need to be aware of the appropriate query for several versions back
- they change, not by much, but they do change. So now we'd have each query
text in three places: a system view, postgres_fdw, and the bin/scripts
pre-upgrade program. So I probably should consider the best way to share
those in the codebase.

Attached is v2 of this patch.

New features:
* imports index statistics. This is not strictly accurate: it re-computes
index statistics the same as ANALYZE does, which is to say it derives those
stats entirely from table column stats, which are imported, so in that
sense we're getting index stats without touching the heap.
* now supports extended statistics except for MCV, which is currently
serialized as a difficult-to-decompose bytea field.
* bare-bones CLI script pg_export_stats, which extracts stats on databases
back to v12 (tested) and could work back to v10.
* bare-bones CLI script pg_import_stats, which obviously only works on
current devel dbs, but can take exports from older versions.

Attachments:

v2-0001-Additional-internal-jsonb-access-functions.patch (text/x-patch; charset=US-ASCII) +20/-2
v2-0002-Add-system-view-pg_statistic_export.patch (text/x-patch; charset=US-ASCII) +291/-1
v2-0003-Add-pg_import_rel_stats.patch (text/x-patch; charset=US-ASCII) +1527/-64
v2-0004-Add-pg_export_stats-pg_import_stats.patch (text/x-patch; charset=US-ASCII) +1254/-2
#5 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Corey Huinker (#4)
Re: Statistics Import and Export

On 10/31/23 08:25, Corey Huinker wrote:

Attached is v2 of this patch.

New features:
* imports index statistics. This is not strictly accurate: it
re-computes index statistics the same as ANALYZE does, which is to
say it derives those stats entirely from table column stats, which
are imported, so in that sense we're getting index stats without
touching the heap.

Maybe I just don't understand, but I'm pretty sure ANALYZE does not
derive index stats from column stats. It actually builds them from the
row sample.

* now supports extended statistics except for MCV, which is currently
serialized as a difficult-to-decompose bytea field.

Doesn't pg_mcv_list_items() already do all the heavy work?

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#6 Corey Huinker
corey.huinker@gmail.com
In reply to: Tomas Vondra (#5)
Re: Statistics Import and Export

Maybe I just don't understand, but I'm pretty sure ANALYZE does not
derive index stats from column stats. It actually builds them from the
row sample.

That is correct, my error.

* now supports extended statistics except for MCV, which is currently
serialized as a difficult-to-decompose bytea field.

Doesn't pg_mcv_list_items() already do all the heavy work?

Thanks! I'll look into that.

The comment below in mcv.c made me think there was no easy way to get
output.

/*
* pg_mcv_list_out - output routine for type pg_mcv_list.
*
* MCV lists are serialized into a bytea value, so we simply call byteaout()
* to serialize the value into text. But it'd be nice to serialize that into
* a meaningful representation (e.g. for inspection by people).
*
* XXX This should probably return something meaningful, similar to what
* pg_dependencies_out does. Not sure how to deal with the deduplicated
* values, though - do we want to expand that or not?
*/

#7 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Corey Huinker (#6)
Re: Statistics Import and Export

On 11/2/23 06:01, Corey Huinker wrote:

Maybe I just don't understand, but I'm pretty sure ANALYZE does not
derive index stats from column stats. It actually builds them from the
row sample.

That is correct, my error.
 

* now supports extended statistics except for MCV, which is currently
serialized as a difficult-to-decompose bytea field.

Doesn't pg_mcv_list_items() already do all the heavy work?

Thanks! I'll look into that.

The comment below in mcv.c made me think there was no easy way to get
output.

/*
 * pg_mcv_list_out      - output routine for type pg_mcv_list.
 *
 * MCV lists are serialized into a bytea value, so we simply call byteaout()
 * to serialize the value into text. But it'd be nice to serialize that into
 * a meaningful representation (e.g. for inspection by people).
 *
 * XXX This should probably return something meaningful, similar to what
 * pg_dependencies_out does. Not sure how to deal with the deduplicated
 * values, though - do we want to expand that or not?
 */

Yeah, that was the simplest output function possible, it didn't seem
worth it to implement something more advanced. pg_mcv_list_items() is
more convenient for most needs, but it's quite far from the on-disk
representation.

That's actually a good question - how closely should the exported data
be to the on-disk format? I'd say we should keep it abstract, not tied
to the details of the on-disk format (which might easily change between
versions).

I'm a bit confused about the JSON schema used in the pg_statistic_export
view, though. It simply serializes stakinds, stavalues, stanumbers into
arrays ... which works, but why not use the JSON nesting? I mean,
there could be a nested document for histogram, MCV, ... with just the
correct fields.

{
...
histogram : { stavalues: [...] },
mcv : { stavalues: [...], stanumbers: [...] },
...
}

and so on. Also, what does TRIVIAL stand for?

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#8 Shubham Khanna
khannashubham1197@gmail.com
In reply to: Corey Huinker (#4)
Re: Statistics Import and Export

On Mon, Nov 6, 2023 at 4:16 PM Corey Huinker <corey.huinker@gmail.com> wrote:

Yeah, that use makes sense as well, and if so then postgres_fdw would likely need to be aware of the appropriate query for several versions back - they change, not by much, but they do change. So now we'd have each query text in three places: a system view, postgres_fdw, and the bin/scripts pre-upgrade program. So I probably should consider the best way to share those in the codebase.

Attached is v2 of this patch.

While applying Patch, I noticed few Indentation issues:
1) D:\Project\Postgres>git am v2-0003-Add-pg_import_rel_stats.patch
.git/rebase-apply/patch:1265: space before tab in indent.
errmsg("invalid statistics
format, stxndeprs must be array or null");
.git/rebase-apply/patch:1424: trailing whitespace.
errmsg("invalid statistics format,
stxndistinct attnums elements must be strings, but one is %s",
.git/rebase-apply/patch:1315: new blank line at EOF.
+
warning: 3 lines add whitespace errors.
Applying: Add pg_import_rel_stats().

2) D:\Project\Postgres>git am v2-0004-Add-pg_export_stats-pg_import_stats.patch
.git/rebase-apply/patch:282: trailing whitespace.
const char *export_query_v14 =
.git/rebase-apply/patch:489: trailing whitespace.
const char *export_query_v12 =
.git/rebase-apply/patch:648: trailing whitespace.
const char *export_query_v10 =
.git/rebase-apply/patch:826: trailing whitespace.

.git/rebase-apply/patch:1142: trailing whitespace.
result = PQexec(conn,
warning: squelched 4 whitespace errors
warning: 9 lines add whitespace errors.
Applying: Add pg_export_stats, pg_import_stats.

Thanks and Regards,
Shubham Khanna.

#9 Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Corey Huinker (#4)
Re: Statistics Import and Export

On Tue, Oct 31, 2023 at 12:55 PM Corey Huinker <corey.huinker@gmail.com> wrote:

Yeah, that use makes sense as well, and if so then postgres_fdw would likely need to be aware of the appropriate query for several versions back - they change, not by much, but they do change. So now we'd have each query text in three places: a system view, postgres_fdw, and the bin/scripts pre-upgrade program. So I probably should consider the best way to share those in the codebase.

Attached is v2 of this patch.

New features:
* imports index statistics. This is not strictly accurate: it re-computes index statistics the same as ANALYZE does, which is to say it derives those stats entirely from table column stats, which are imported, so in that sense we're getting index stats without touching the heap.
* now supports extended statistics except for MCV, which is currently serialized as a difficult-to-decompose bytea field.
* bare-bones CLI script pg_export_stats, which extracts stats on databases back to v12 (tested) and could work back to v10.
* bare-bones CLI script pg_import_stats, which obviously only works on current devel dbs, but can take exports from older versions.

I did a small experiment with your patches. In a separate database
"fdw_dst" I created a table t1 and populated it with 100K rows
#create table t1 (a int, b int);
#insert into t1 select i, i + 1 from generate_series(1, 100000) i;
#analyse t1;

In database "postgres" on the same server, I created a foreign table
pointing to t1
#create server fdw_dst_server foreign data wrapper postgres_fdw
OPTIONS ( dbname 'fdw_dst', port '5432');
#create user mapping for public server fdw_dst_server ;
#create foreign table t1 (a int, b int) server fdw_dst_server;

The estimates are off
#explain select * from t1 where a = 100;
QUERY PLAN
-----------------------------------------------------------
Foreign Scan on t1 (cost=100.00..142.26 rows=13 width=8)
(1 row)

Export and import stats for table t1
$ pg_export_stats -d fdw_dst | pg_import_stats -d postgres

gives accurate estimates
#explain select * from t1 where a = 100;
QUERY PLAN
-----------------------------------------------------------
Foreign Scan on t1 (cost=100.00..1793.02 rows=1 width=8)
(1 row)

In this simple case it's working like a charm.

Then I wanted to replace all ANALYZE commands in postgres_fdw.sql with
import and export of statistics. But I cannot do that, since it
requires table names to match. Foreign table metadata stores the
mapping between local and remote table as well as column names. Import
can use that mapping to install the statistics appropriately. We may
want to support a command or function in postgres_fdw to import
statistics of all the tables that point to a given foreign server.
That may be some future work based on your current patches.

I have not looked at the code though.

--
Best Wishes,
Ashutosh Bapat

#10 Corey Huinker
corey.huinker@gmail.com
In reply to: Tomas Vondra (#7)
Re: Statistics Import and Export

Yeah, that was the simplest output function possible, it didn't seem
worth it to implement something more advanced. pg_mcv_list_items() is
more convenient for most needs, but it's quite far from the on-disk
representation.

I was able to make it work.

That's actually a good question - how closely should the exported data
be to the on-disk format? I'd say we should keep it abstract, not tied
to the details of the on-disk format (which might easily change between
versions).

For the most part, I chose the exported data's json types and formats in a
way that was the most accommodating to cstring input functions. So, while
many of the statistic values are obviously only ever integers/floats,
those get stored as a numeric data type which lacks direct
numeric->int/float4/float8 functions (though we could certainly create
them, and I'm not against that); casting them to text lets us leverage
pg_strtoint16, etc.

I'm a bit confused about the JSON schema used in the pg_statistic_export
view, though. It simply serializes stakinds, stavalues, stanumbers into
arrays ... which works, but why not use the JSON nesting? I mean,
there could be a nested document for histogram, MCV, ... with just the
correct fields.

{
...
histogram : { stavalues: [...] },
mcv : { stavalues: [...], stanumbers: [...] },
...
}

That's a very good question. I went with this format because it was fairly
straightforward to code in SQL using existing JSON/JSONB functions, and
that's what we will need if we want to export statistics on any server
currently in existence. I'm certainly not locked in with the current
format, and if it can be shown how to transform the data into a superior
format, I'd happily do so.

and so on. Also, what does TRIVIAL stand for?

It's currently serving double-duty for "there are no stats in this slot"
and the situations where the stats computation could draw no conclusions
about the data.

Attached is v3 of this patch. Key features are:

* Handles regular pg_statistic stats for any relation type.
* Handles extended statistics.
* Export views pg_statistic_export and pg_statistic_ext_export to allow
inspection of existing stats and saving those values for later use.
* Import functions pg_import_rel_stats() and pg_import_ext_stats() which
take Oids as input. This is intentional to allow stats from one object to
be imported into another object.
* User scripts pg_export_stats and pg_import_stats, which offer a primitive
way to serialize all the statistics of one database and import them into
another.
* Has regression test coverage for both with a variety of data types.
* Passes my own manual test of extracting all of the stats from a v15
version of the popular "dvdrental" example database, as well as some
additional extended statistics objects, and importing them into a
development database.
* Import operations never touch the heap of any relation outside of
pg_catalog. As such, this should be significantly faster than even the most
cursory analyze operation, and therefore should be useful in upgrade
situations, allowing the database to work with "good enough" stats more
quickly, while still allowing for regular autovacuum to recalculate the
stats "for real" at some later point.

The relation statistics code was adapted from similar features in
analyze.c, but is now done in a query context. As before, the
rowcount/pagecount values are updated on pg_class in a non-transactional
fashion to avoid table bloat, while the updates to pg_statistic and
pg_statistic_ext_data are done transactionally.

The existing statistics _store() functions were leveraged wherever
practical, so much so that the extended statistics import is mostly just
adapting the existing _build() functions into _import() functions which
pull their values from JSON rather than computing the statistics.

Current concerns are:

1. I had to code a special-case exception for MCELEM stats on array data
types, so that the array_in() call uses the element type rather than the
array type. I had assumed that the existing examine_attribute() functions
would have properly derived the typoid for that column, but it appears
not to be the case, and I'm clearly missing how the existing code gets it
right.
2. This hasn't been tested with external custom datatypes, but if they have
a custom typanalyze function things should be ok.
3. While I think I have cataloged all of the schema-structural changes to
pg_statistic[_ext[_data]] since version 10, I may have missed a case where
the schema stayed the same, but the values are interpreted differently.
4. I don't yet have a complete vision for how these tools will be used by
pg_upgrade and pg_dump/restore, the places where these will provide the
biggest win for users.

Attachments:

v3-0002-Add-system-view-pg_statistic_export.patch (text/x-patch; charset=US-ASCII) +120/-1
v3-0001-Additional-internal-jsonb-access-functions.patch (text/x-patch; charset=US-ASCII) +20/-2
v3-0003-Add-pg_import_rel_stats.patch (text/x-patch; charset=US-ASCII) +1130/-3
v3-0004-Add-pg_export_stats-pg_import_stats.patch (text/x-patch; charset=US-ASCII) +608/-2
v3-0005-Add-system-view-pg_statistic_ext_export.patch (text/x-patch; charset=US-ASCII) +170/-1
v3-0006-Create-create_stat_ext_entry-from-fetch_statentri.patch (text/x-patch; charset=US-ASCII) +78/-69
v3-0008-Allow-explicit-nulls-in-container-lookups.patch (text/x-patch; charset=US-ASCII) +10/-5
v3-0007-Add-pg_import_ext_stats.patch (text/x-patch; charset=US-ASCII) +794/-7
v3-0009-Enable-pg_export_stats-pg_import_stats-to-use-ext.patch (text/x-patch; charset=US-ASCII) +672/-66
#11 Andrei Lepikhov
lepihov@gmail.com
In reply to: Corey Huinker (#10)
Re: Statistics Import and Export

On 13/12/2023 17:26, Corey Huinker wrote:

4. I don't yet have a complete vision for how these tools will be used
by pg_upgrade and pg_dump/restore, the places where these will provide
the biggest win for users.

Some issues here with docs:

func.sgml:28465: parser error : Opening and ending tag mismatch: sect1
line 26479 and sect2
</sect2>
^

Also, as I remember, we already had some attempts to invent dump/restore
of statistics [1,2]. They were stopped by the problem of type
verification. What if the definition of the type has changed between the
dump and the restore? As I see in the code, when importing statistics you
just check the column name and don't look into the type.

[1]: Backup and recovery of pg_statistic /messages/by-id/724322880.K8vzik8zPz@abook
[2]: Re: Ideas about a better API for postgres_fdw remote estimates /messages/by-id/7a40707d-1758-85a2-7bb1-6e5775518e64@postgrespro.ru

--
regards,
Andrei Lepikhov
Postgres Professional

#12 Corey Huinker
corey.huinker@gmail.com
In reply to: Andrei Lepikhov (#11)
Re: Statistics Import and Export

On Fri, Dec 15, 2023 at 3:36 AM Andrei Lepikhov <a.lepikhov@postgrespro.ru>
wrote:

On 13/12/2023 17:26, Corey Huinker wrote:> 4. I don't yet have a
complete vision for how these tools will be used

by pg_upgrade and pg_dump/restore, the places where these will provide
the biggest win for users.

Some issues here with docs:

func.sgml:28465: parser error : Opening and ending tag mismatch: sect1
line 26479 and sect2
</sect2>
^

Apologies, will fix.

Also, as I remember, we already had some attempts to invent dump/restore
of statistics [1,2]. They were stopped by the problem of type
verification. What if the definition of the type has changed between the
dump and the restore? As I see in the code, when importing statistics you
just check the column name and don't look into the type.

We look up the imported statistics via column name, that is correct.

However, the values in stavalues and mcv and such are stored purely as
text, so they must be cast using the input functions for that particular
datatype. If that column definition changed, or the underlying input
function changed, the stats import of that particular table would fail. It
should be noted, however, that those same input functions were used to
bring the data into the table via restore, so it would have already failed
on that step. Either way, the structure of the table has effectively
changed, so failure to import those statistics would be a good thing.
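A minimal illustration of that failure mode, using a plain cast (which
goes through the same input function):

```sql
-- A value exported as text from a numeric column no longer parses
-- if the column has since been redefined as integer:
SELECT '2.5'::numeric;   -- ok
SELECT '2.5'::integer;   -- ERROR: invalid input syntax for type integer: "2.5"
```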

[1] Backup and recovery of pg_statistic
/messages/by-id/724322880.K8vzik8zPz@abook

That proposal sought to serialize enough information on the old server such
that rows could be directly inserted into pg_statistic on the new server.
As was pointed out at the time, version N of a server cannot know what the
format of pg_statistic will be in version N+1.

This patch avoids that problem by inspecting the structure of the object to
be faux-analyzed, and using that to determine what parts of the JSON to
fetch, and what datatype to cast values to in cases like mcv and
stavaluesN. The exported JSON has no oids in it whatsoever, all elements
subject to casting on import have already been cast to text, and the record
returned has the server version number of the producing system, and the
import function can use that to determine how it interprets the data it
finds.

[2] Re: Ideas about a better API for postgres_fdw remote estimates

/messages/by-id/7a40707d-1758-85a2-7bb1-6e5775518e64@postgrespro.ru

This one seems to be pulling oids from the remote server, and we can't
guarantee their stability across systems, especially for objects and
operators from extensions. I tried to go the route of extracting the full
text name of an operator, but discovered that the qualified names, in
addition to being unsightly, were irrelevant because we can't insert stats
that disagree about type with the attribute/expression. So it didn't matter
what type the remote system thought it had, the local system was going to
coerce it into the expected data type or ereport() trying.

I think there is hope for having do_analyze() run a remote query fetching
the remote table's exported stats and then storing them locally, possibly
after some modification, and that would save us from having to sample a
remote table.

#13 Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Corey Huinker (#10)
Re: Statistics Import and Export

Hi,

I finally had time to look at the last version of the patch, so here's a
couple of thoughts and questions in somewhat random order. Please take this
as a bit of brainstorming, and push back if you disagree with some of my
comments.

In general, I like the goal of this patch - not having statistics is a
common issue after an upgrade, and people sometimes don't even realize
they need to run analyze. So, it's definitely worth improving.

I'm not entirely sure about the other use case - allowing people to
tweak optimizer statistics on a running cluster, to see what would be
the plan in that case. Or more precisely - I agree that would be an
interesting and useful feature, but maybe the interface should not be
the same as for the binary upgrade use case?

interfaces
----------

When I thought about the ability to dump/load statistics in the past, I
usually envisioned some sort of DDL that would do the export and import.
So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,
or something like that, and that'd do all the work. This would mean
stats are "first-class citizens" and it'd be fairly straightforward to
add this into pg_dump, for example. Or at least I think so ...

Alternatively we could have the usual "functional" interface, with a
functions to export/import statistics, replacing the DDL commands.

Unfortunately, none of this works for the pg_upgrade use case, because
existing cluster versions would not support this new interface, of
course. That's a significant flaw, as it'd make this useful only for
upgrades of future versions.

So I think for the pg_upgrade use case, we don't have much choice other
than using "custom" export through a view, which is what the patch does.

However, for the other use case (tweaking optimizer stats) this is not
really an issue - that always happens on the same instance, so no issue
with not having the "export" function and so on. I'd bet there are more
convenient ways to do this than using the export view. I'm sure it could
share a lot of the infrastructure, ofc.

I suggest we focus on the pg_upgrade use case for now. In particular, I
think we really need to find a good way to integrate this into
pg_upgrade. I'm not against having custom CLI commands, but it's still a
manual thing - I wonder if we could extend pg_dump to dump stats, or
make it built-in into pg_upgrade in some way (possibly disabled by
default, or something like that).

JSON format
-----------

As for the JSON format, I wonder if we need that at all? Isn't that an
unnecessary layer of indirection? Couldn't we simply dump pg_statistic
and pg_statistic_ext_data in CSV, or something like that? The amount of
new JSONB code seems to be very small, so it's OK I guess.

I'm still a bit unsure about the "right" JSON schema. I find it a bit
inconvenient that the JSON objects mimic the pg_statistic schema very
closely. In particular, it has one array for stakind values, another
array for stavalues, array for stanumbers etc. I understand generating
this JSON in SQL is fairly straightforward, and for the pg_upgrade use
case it's probably OK. But my concern is it's not very convenient for
the "manual tweaking" use case, because the "related" fields are
scattered in different parts of the JSON.

That's pretty much why I envisioned a format "grouping" the arrays for a
particular type of statistics (MCV, histogram) into the same object, as
for example in

{
"mcv" : {"values" : [...], "frequencies" : [...]}
"histogram" : {"bounds" : [...]}
}

But that's probably much harder to generate from plain SQL (at least I
think so, I haven't tried).
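For illustration, something like this might work (untested sketch,
reading from the pg_stats view rather than the raw catalog; a real
export view would have to branch on the stakind slots instead):

```sql
-- Rough sketch of the nested format built in plain SQL; the
-- anyarray::text::text[] double cast is the usual trick for
-- getting at pg_stats array contents.
SELECT jsonb_build_object(
         'mcv', jsonb_build_object(
                  'values',      to_jsonb(most_common_vals::text::text[]),
                  'frequencies', to_jsonb(most_common_freqs)),
         'histogram', jsonb_build_object(
                  'bounds', to_jsonb(histogram_bounds::text::text[])))
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'my_table' AND attname = 'a';
```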

data missing in the export
--------------------------

I think the data needs to include more information. Maybe not for the
pg_upgrade use case, where it's mostly guaranteed not to change, but for
the "manual tweak" use case it can change. And I don't think we want two
different formats - we want one, working for everything.

Consider for example the staopN and stacollN fields - if we clone
the stats from one table to the other, and the tables use different
collations, will that still work? Similarly, I think we should include
the type of each column, because it's absolutely not guaranteed the
import function will fail if the type changes. For example, if the type
changes from integer to text, that will work, but the ordering will
absolutely not be the same. And so on.
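To illustrate the ordering point:

```sql
-- The same values sort differently as integers vs. as text, so
-- histogram bounds imported across such a type change are garbage:
SELECT v FROM (VALUES (2), (9), (10)) t(v) ORDER BY v;        -- 2, 9, 10
SELECT v FROM (VALUES ('2'), ('9'), ('10')) t(v) ORDER BY v;  -- 10, 2, 9
```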

For the extended statistics export, I think we need to include also the
attribute names and expressions, because these can be different between
the statistics. And not only that - the statistics values reference the
attributes by positions, but if the two tables have the attributes in a
different order (when ordered by attnum), that will break stuff.

more strict checks
------------------

I think the code should be a bit more "defensive" when importing stuff,
and do at least some sanity checks. For the pg_upgrade use case this
should be mostly non-issue (except for maybe helping to detect bugs
earlier), but for the "manual tweak" use case it's much more important.

By this I mean checks like:

* making sure the frequencies in MCV lists are not obviously wrong
(outside [0,1], sum exceeding > 1.0, etc.)

* cross-checking that stanumbers/stavalues make sense (e.g. that MCV has
both arrays while histogram has only stavalues, that the arrays have
the same length for MCV, etc.)

* checking there are no duplicate stakind values (e.g. two MCV lists)

This is another reason why I was thinking the current JSON format may be
a bit inconvenient, because it loads the fields separately, making the
checks harder. But I guess it could be done after loading everything, as
a separate phase.

Not sure if all the checks need to be regular elog(ERROR), perhaps some
could/should be just asserts.
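For instance, the MCV frequency checks amount to something like this
(untested sketch; the JSON paths follow the nested format sketched
earlier in this message, not the patch's current schema):

```sql
-- Deliberately broken frequency list: the sum is 1.2, so freq_sum_ok
-- comes back false while freqs_in_range is still true.
WITH imported(stats) AS (
  SELECT '{"mcv": {"frequencies": [0.5, 0.3, 0.4]}}'::jsonb
)
SELECT sum(f::float8) <= 1.0               AS freq_sum_ok,
       bool_and(f::float8 BETWEEN 0 AND 1) AS freqs_in_range
FROM imported,
     jsonb_array_elements_text(stats -> 'mcv' -> 'frequencies') t(f);
```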

minor questions
---------------

1) Should the views be called pg_statistic_export or pg_stats_export?
Perhaps pg_stats_export is better, because the format is meant to be
human-readable (rather than 100% internal).

2) It's not very clear what "non-transactional update" of pg_class
fields actually means. Does that mean we update the fields in-place,
can't be rolled back, is not subject to MVCC or what? I suspect users
won't know unless the docs say that explicitly.

3) The "statistics.c" code should really document the JSON structure. Or
maybe if we plan to use this for other purposes, it should be documented
in the SGML?

Actually, this means that the supported use cases determine whether the
expected JSON structure is part of the API. For pg_upgrade we could keep
it as "internal" and maybe change it as needed, but for "manual tweak"
it'd become part of the public API.

4) Why do we need the separate "replaced" flags in import_stakinds? Can
it happen that collreplaces/opreplaces differ from kindreplaces?

5) What happens if we import statistics for a table that already has
some statistics? Will this discard the existing statistics, or will this
merge them somehow? (I think we should always discard the existing
stats, and keep only the new version.)

6) What happens if we import extended stats with mismatching definition?
For example, what if the "new" statistics object does not have "mcv"
enabled, but the imported data do include MCV? What if the statistics do
have the same number of "dimensions" but not the same number of columns
and expressions?

7) The func.sgml additions in 0007 seem a bit strange, particularly the
first sentence of the paragraph.

8) While experimenting with the patch, I noticed this:

create table t (a int, b int, c text);
create statistics s on a, b, c, (a+b), (a-b) from t;

create table t2 (a text, b text, c text);
create statistics s2 on a, c from t2;

select pg_import_ext_stats(
(select oid from pg_statistic_ext where stxname = 's2'),
(select server_version_num from pg_statistic_ext_export
where ext_stats_name = 's'),
(select stats from pg_statistic_ext_export
where ext_stats_name = 's'));

WARNING: statistics import has 5 mcv dimensions, but the expects 2.
Skipping excess dimensions.
ERROR: statistics import has 5 mcv dimensions, but the expects 2.
Skipping excess dimensions.

I guess we should not trigger WARNING+ERROR with the same message.

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#14Bruce Momjian
bruce@momjian.us
In reply to: Tomas Vondra (#13)
Re: Statistics Import and Export

On Tue, Dec 26, 2023 at 02:18:56AM +0100, Tomas Vondra wrote:

interfaces
----------

When I thought about the ability to dump/load statistics in the past, I
usually envisioned some sort of DDL that would do the export and import.
So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,
or something like that, and that'd do all the work. This would mean
stats are "first-class citizens" and it'd be fairly straightforward to
add this into pg_dump, for example. Or at least I think so ...

Alternatively we could have the usual "functional" interface, with a
functions to export/import statistics, replacing the DDL commands.

Unfortunately, none of this works for the pg_upgrade use case, because
existing cluster versions would not support this new interface, of
course. That's a significant flaw, as it'd make this useful only for
upgrades of future versions.

So I think for the pg_upgrade use case, we don't have much choice other
than using "custom" export through a view, which is what the patch does.

However, for the other use case (tweaking optimizer stats) this is not
really an issue - that always happens on the same instance, so no issue
with not having the "export" function and so on. I'd bet there are more
convenient ways to do this than using the export view. I'm sure it could
share a lot of the infrastructure, ofc.

I suggest we focus on the pg_upgrade use case for now. In particular, I
think we really need to find a good way to integrate this into
pg_upgrade. I'm not against having custom CLI commands, but it's still a
manual thing - I wonder if we could extend pg_dump to dump stats, or
make it built-in into pg_upgrade in some way (possibly disabled by
default, or something like that).

I have some thoughts on this too. I understand the desire to add
something that can be used for upgrades _to_ PG 17, but I am concerned
that this will give us a cumbersome API that will hamper future
development. I think we should develop the API we want, regardless of
how useful it is for upgrades _to_ PG 17, and then figure out what
short-term hacks we can add to get it working for upgrades _to_ PG 17;
these hacks can eventually be removed. Even if they can't be removed,
they are export-only and we can continue developing the import SQL
command cleanly, and I think import is going to need the most long-term
maintenance.

I think we need a robust API to handle two cases:

* changes in how we store statistics
* changes in how data type values are represented in the statistics

We have had such changes in the past, and I think these two issues are
what have prevented import/export of statistics up to this point.
Developing an API that doesn't cleanly handle these will cause long-term
pain.

In summary, I think we need an SQL-level command for this. I think we
need to embed the Postgres export version number into the statistics
export file (maybe in the COPY header), and then load the file via COPY
internally (not JSON) into a temporary table that we know matches the
exported Postgres version. We then need to use SQL to make any
adjustments to it before loading it into pg_statistic. Doing that
internally in JSON just isn't efficient. If people want JSON for such
cases, I suggest we add a JSON format to COPY.

I think we can then look at pg_upgrade to see if we can simulate the
export action which can use the statistics import SQL command.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Only you can decide what is important to you.

#15Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#14)
Re: Statistics Import and Export

Bruce Momjian <bruce@momjian.us> writes:

I think we need a robust API to handle two cases:

* changes in how we store statistics
* changes in how data type values are represented in the statistics

We have had such changes in the past, and I think these two issues are
what have prevented import/export of statistics up to this point.
Developing an API that doesn't cleanly handle these will cause long-term
pain.

Agreed.

In summary, I think we need an SQL-level command for this.

I think a SQL command is an actively bad idea. It'll just add development
and maintenance overhead that we don't need. When I worked on this topic
years ago at Salesforce, I had things set up with simple functions, which
pg_dump would invoke by writing more or less

SELECT pg_catalog.load_statistics(....);

This has a number of advantages, not least of which is that an extension
could plausibly add compatible functions to older versions. The trick,
as you say, is to figure out what the argument lists ought to be.
Unfortunately I recall few details of what I wrote for Salesforce,
but I think I had it broken down in a way where there was a separate
function call occurring for each pg_statistic "slot", thus roughly

load_statistics(table regclass, attname text, stakind int, stavalue ...);

I might have had a separate load_statistics_xxx function for each
stakind, which would ease the issue of deciding what the datatype
of "stavalue" is. As mentioned already, we'd also need some sort of
version identifier, and we'd expect the load_statistics() functions
to be able to transform the data if the old version used a different
representation. I agree with the idea that an explicit representation
of the source table attribute's type would be wise, too.
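
The dumping side could then drive this convention roughly as below; this
is only a sketch of the hypothetical interface (the function name,
argument list, and helper here are all assumptions, not an existing API):

```python
def quote_literal(s: str) -> str:
    # minimal SQL string-literal quoting, enough for this sketch
    return "'" + s.replace("'", "''") + "'"

def emit_load_statistics(table: str, attname: str, slots):
    """Yield one SELECT pg_catalog.load_statistics(...) statement per
    pg_statistic slot, as pg_dump might write them."""
    for slot in slots:
        args = ", ".join([
            quote_literal(table) + "::regclass",
            quote_literal(attname),
            str(slot["stakind"]),
            quote_literal(slot["stavalue"]),
        ])
        yield f"SELECT pg_catalog.load_statistics({args});"
```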

regards, tom lane

#16Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Tom Lane (#15)
Re: Statistics Import and Export

On 12/26/23 20:19, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

I think we need a robust API to handle two cases:

* changes in how we store statistics
* changes in how data type values are represented in the statistics

We have had such changes in the past, and I think these two issues are
what have prevented import/export of statistics up to this point.
Developing an API that doesn't cleanly handle these will cause long-term
pain.

Agreed.

I agree the format is important - we don't want to end up with a format
that's cumbersome or inconvenient to use. But I don't think the proposed
format is particularly bad in those respects - it mostly reflects how we
store statistics, and if I was designing a format for humans, it might
look a bit different. But that's not the goal here, IMHO.

I don't quite understand the two cases above. Why should this affect how
we store statistics? Surely, making the statistics easy to use for the
optimizer is much more important than occasional export/import.

In summary, I think we need an SQL-level command for this.

I think a SQL command is an actively bad idea. It'll just add development
and maintenance overhead that we don't need. When I worked on this topic
years ago at Salesforce, I had things set up with simple functions, which
pg_dump would invoke by writing more or less

SELECT pg_catalog.load_statistics(....);

This has a number of advantages, not least of which is that an extension
could plausibly add compatible functions to older versions. The trick,
as you say, is to figure out what the argument lists ought to be.
Unfortunately I recall few details of what I wrote for Salesforce,
but I think I had it broken down in a way where there was a separate
function call occurring for each pg_statistic "slot", thus roughly

load_statistics(table regclass, attname text, stakind int, stavalue ...);

I might have had a separate load_statistics_xxx function for each
stakind, which would ease the issue of deciding what the datatype
of "stavalue" is. As mentioned already, we'd also need some sort of
version identifier, and we'd expect the load_statistics() functions
to be able to transform the data if the old version used a different
representation. I agree with the idea that an explicit representation
of the source table attribute's type would be wise, too.

Yeah, this is pretty much what I meant by "functional" interface. But if
the format implemented by the patch is maybe too close to how we store
the statistics, then this approach has exactly the same issue. And
it has other issues too, I think - it breaks down the stats into
multiple function calls, so ensuring the sanity/correctness of whole
sets of statistics gets much harder, I think.

I'm not sure about the extension idea. Yes, we could have an extension
providing such functions, but do we have any precedent of making
pg_upgrade dependent on an external extension? I'd much rather have
something built-in that just works, especially if we intend to make it
the default behavior (which I think should be our aim here).

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#17Bruce Momjian
bruce@momjian.us
In reply to: Tomas Vondra (#16)
Re: Statistics Import and Export

On Wed, Dec 27, 2023 at 01:08:47PM +0100, Tomas Vondra wrote:

On 12/26/23 20:19, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

I think we need a robust API to handle two cases:

* changes in how we store statistics
* changes in how data type values are represented in the statistics

We have had such changes in the past, and I think these two issues are
what have prevented import/export of statistics up to this point.
Developing an API that doesn't cleanly handle these will cause long-term
pain.

Agreed.

I agree the format is important - we don't want to end up with a format
that's cumbersome or inconvenient to use. But I don't think the proposed
format is particularly bad in those respects - it mostly reflects how we
store statistics, and if I was designing a format for humans, it might
look a bit different. But that's not the goal here, IMHO.

I don't quite understand the two cases above. Why should this affect how
we store statistics? Surely, making the statistics easy to use for the
optimizer is much more important than occasional export/import.

The two items above were to focus on getting a solution that can easily
handle future statistics storage changes. I figured we would want to
manipulate the data as a table internally, so I am confused why we would
export JSON instead of a COPY format. I didn't think we were changing
how we internally store or use the statistics.

In summary, I think we need an SQL-level command for this.

I think a SQL command is an actively bad idea. It'll just add development
and maintenance overhead that we don't need. When I worked on this topic
years ago at Salesforce, I had things set up with simple functions, which
pg_dump would invoke by writing more or less

SELECT pg_catalog.load_statistics(....);

This has a number of advantages, not least of which is that an extension
could plausibly add compatible functions to older versions. The trick,
as you say, is to figure out what the argument lists ought to be.
Unfortunately I recall few details of what I wrote for Salesforce,
but I think I had it broken down in a way where there was a separate
function call occurring for each pg_statistic "slot", thus roughly

load_statistics(table regclass, attname text, stakind int, stavalue ...);

I might have had a separate load_statistics_xxx function for each
stakind, which would ease the issue of deciding what the datatype
of "stavalue" is. As mentioned already, we'd also need some sort of
version identifier, and we'd expect the load_statistics() functions
to be able to transform the data if the old version used a different
representation. I agree with the idea that an explicit representation
of the source table attribute's type would be wise, too.

Yeah, this is pretty much what I meant by "functional" interface. But if
the format implemented by the patch is maybe too close to how we store
the statistics, then this approach has exactly the same issue. And
it has other issues too, I think - it breaks down the stats into
multiple function calls, so ensuring the sanity/correctness of whole
sets of statistics gets much harder, I think.

I was suggesting an SQL command because this feature is going to need a
lot of options and do a lot of different things, I am afraid, and a
single function might be too complex to manage.

I'm not sure about the extension idea. Yes, we could have an extension
providing such functions, but do we have any precedent of making
pg_upgrade dependent on an external extension? I'd much rather have
something built-in that just works, especially if we intend to make it
the default behavior (which I think should be our aim here).

Uh, an extension seems nice to allow people in back branches to install
it, but not for normal usage.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Only you can decide what is important to you.

#18Corey Huinker
corey.huinker@gmail.com
In reply to: Tomas Vondra (#13)
Re: Statistics Import and Export

On Mon, Dec 25, 2023 at 8:18 PM Tomas Vondra <tomas.vondra@enterprisedb.com>
wrote:

Hi,

I finally had time to look at the last version of the patch, so here's a
couple thoughts and questions in somewhat random order. Please take this
as a bit of a brainstorming and push back if you disagree some of my
comments.

In general, I like the goal of this patch - not having statistics is a
common issue after an upgrade, and people sometimes don't even realize
they need to run analyze. So, it's definitely worth improving.

I'm not entirely sure about the other use case - allowing people to
tweak optimizer statistics on a running cluster, to see what would be
the plan in that case. Or more precisely - I agree that would be an
interesting and useful feature, but maybe the interface should not be
the same as for the binary upgrade use case?

interfaces
----------

When I thought about the ability to dump/load statistics in the past, I
usually envisioned some sort of DDL that would do the export and import.
So for example we'd have EXPORT STATISTICS / IMPORT STATISTICS commands,
or something like that, and that'd do all the work. This would mean
stats are "first-class citizens" and it'd be fairly straightforward to
add this into pg_dump, for example. Or at least I think so ...

Alternatively we could have the usual "functional" interface, with a
functions to export/import statistics, replacing the DDL commands.

Unfortunately, none of this works for the pg_upgrade use case, because
existing cluster versions would not support this new interface, of
course. That's a significant flaw, as it'd make this useful only for
upgrades of future versions.

This was the reason I settled on the interface that I did: while we can
create whatever interface we want for importing the statistics, we would
need to be able to extract stats from databases using only the facilities
available in those same databases, and then store that in a medium that
could be conveyed across databases, either by text files or by saving them
off in a side table prior to upgrade. JSONB met the criteria.

So I think for the pg_upgrade use case, we don't have much choice other
than using "custom" export through a view, which is what the patch does.

However, for the other use case (tweaking optimizer stats) this is not
really an issue - that always happens on the same instance, so no issue
with not having the "export" function and so on. I'd bet there are more
convenient ways to do this than using the export view. I'm sure it could
share a lot of the infrastructure, ofc.

So, there is a third use case - foreign data wrappers. When analyzing a
foreign table, at least one in the postgres_fdw family of foreign
servers, we should be able to send a query specific to the version and
dialect of that server, get back the JSONB, and import those results. That
use case may be more tangible to you than the tweak/tuning case.

JSON format
-----------

As for the JSON format, I wonder if we need that at all? Isn't that an
unnecessary layer of indirection? Couldn't we simply dump pg_statistic
and pg_statistic_ext_data in CSV, or something like that? The amount of
new JSONB code seems to be very small, so it's OK I guess.

I see a few problems with dumping pg_statistic[_ext_data]. The first is
that the importer now has to understand all of the past formats of those
two tables. The next is that the tables are chock full of Oids that don't
necessarily carry forward. I could see us having a text-ified version of
those two tables, but we'd need that for all previous iterations of those
table formats. Instead, I put the burden on the stats export to de-oid the
data and make it *_in() function friendly.

That's pretty much why I envisioned a format "grouping" the arrays for a
particular type of statistics (MCV, histogram) into the same object, as
for example in

{
"mcv" : {"values" : [...], "frequencies" : [...]}
"histogram" : {"bounds" : [...]}
}

I agree that would be a lot more readable, and probably a lot more
debuggable. But I went into this unsure if there could be more than one
stats slot of a given kind per table. Knowing that they must be unique
helps.

But that's probably much harder to generate from plain SQL (at least I
think so, I haven't tried).

I think it would be harder, but far from impossible.

data missing in the export
--------------------------

I think the data needs to include more information. Maybe not for the
pg_upgrade use case, where it's mostly guaranteed not to change, but for
the "manual tweak" use case it can change. And I don't think we want two
different formats - we want one, working for everything.

I'm not against this at all, and I started out doing that, but the
qualified names of operators got _ugly_, and I quickly realized that what I
was generating wouldn't matter: either the input data would make sense for
the attribute's stats, or it would fail trying.

Consider for example about the staopN and stacollN fields - if we clone
the stats from one table to the other, and the table uses different
collations, will that still work? Similarly, I think we should include
the type of each column, because it's absolutely not guaranteed the
import function will fail if the type changes. For example, if the type
changes from integer to text, that will work, but the ordering will
absolutely not be the same. And so on.

I can see including the type of the column, that's a lot cleaner than the
operator names for sure, and I can see us rejecting stats or sections of
stats in certain situations. Like in your example, if the collation
changed, then reject all "<" op stats but keep the "=" ones.

For the extended statistics export, I think we need to include also the
attribute names and expressions, because these can be different between
the statistics. And not only that - the statistics values reference the
attributes by positions, but if the two tables have the attributes in a
different order (when ordered by attnum), that will break stuff.

Correct me if I'm wrong, but I thought expression parse trees change _a
lot_ from version to version?

Attribute reordering is a definite vulnerability of the current
implementation, so an attribute name export might be a way to mitigate that.
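
As a sketch of that mitigation (all names hypothetical, assuming the
export carries the source table's attribute names in attnum order), the
importer could remap position references against the destination table's
own ordering:

```python
def remap_positions(exported_attnames, dest_attnames, positions):
    """Translate 1-based attribute positions from the exported table's
    ordering into the destination table's attnums, matching by name."""
    dest_index = {name: i + 1 for i, name in enumerate(dest_attnames)}
    remapped = []
    for pos in positions:
        name = exported_attnames[pos - 1]  # exported positions are 1-based
        if name not in dest_index:
            # no attribute of that name in the destination: reject the
            # stats rather than apply them to the wrong column
            raise KeyError(f"attribute {name!r} missing in destination")
        remapped.append(dest_index[name])
    return remapped
```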

* making sure the frequencies in MCV lists are not obviously wrong
(outside [0,1], sum exceeding 1.0, etc.)

+1

* cross-checking that stanumbers/stavalues make sense (e.g. that MCV has
both arrays while histogram has only stavalues, that the arrays have
the same length for MCV, etc.)

To this end, there's an edge-case hack in the code where I have to derive
the array elemtype. I had thought that examine_attribute() or
std_typanalyze() was going to do that for me, but it didn't. Very much want
your input there.

* checking there are no duplicate stakind values (e.g. two MCV lists)

Per previous comment, it's good to learn these restrictions.

Not sure if all the checks need to be regular elog(ERROR), perhaps some
could/should be just asserts.

For this first pass, all errors were one-size-fits-all, save for the
WARNING vs ERROR distinction.

minor questions
---------------

1) Should the views be called pg_statistic_export or pg_stats_export?
Perhaps pg_stats_export is better, because the format is meant to be
human-readable (rather than 100% internal).

I have no opinion on what the best name would be, and will go with
consensus.

2) It's not very clear what "non-transactional update" of pg_class
fields actually means. Does that mean we update the fields in-place,
can't be rolled back, is not subject to MVCC or what? I suspect users
won't know unless the docs say that explicitly.

Correct. Cannot be rolled back, not subject to MVCC.

3) The "statistics.c" code should really document the JSON structure. Or
maybe if we plan to use this for other purposes, it should be documented
in the SGML?

I agree, but I also didn't expect the format to survive first contact with
reviewers, so I held back.

4) Why do we need the separate "replaced" flags in import_stakinds? Can
it happen that collreplaces/opreplaces differ from kindreplaces?

That was initially done to maximize the amount of code that could be copied
from do_analyze(). In retrospect, I like how extended statistics just
deletes all the pg_statistic_ext_data rows and replaces them and I would
like to do the same for pg_statistic before this is all done.

5) What happens in we import statistics for a table that already has
some statistics? Will this discard the existing statistics, or will this
merge them somehow? (I think we should always discard the existing
stats, and keep only the new version.)

In the case of pg_statistic_ext_data, the stats are thrown out and replaced
by the imported ones.

In the case of pg_statistic, it's basically an upsert, and any values that
were missing in the JSON are not updated on the existing row. That's
appealing in a tweak situation where you want to only alter one or two bits
of a stat, but not really useful in other situations. Per previous comment,
I'd prefer a clean slate and forcing tweaking use cases to fill in all the
blanks.

6) What happens if we import extended stats with mismatching definition?
For example, what if the "new" statistics object does not have "mcv"
enabled, but the imported data do include MCV? What if the statistics do
have the same number of "dimensions" but not the same number of columns
and expressions?

The importer is currently driven by the types of stats to be expected for
that pg_attribute/pg_statistic_ext. It only looks for things that are
possible for that stat type, and any extra JSON values are ignored.

#19Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#15)
Re: Statistics Import and Export

As mentioned already, we'd also need some sort of
version identifier, and we'd expect the load_statistics() functions
to be able to transform the data if the old version used a different
representation. I agree with the idea that an explicit representation
of the source table attribute's type would be wise, too.

There is a version identifier currently (its own column, not embedded in the
JSON), but I discovered that I was able to put the burden on the export
queries to spackle over the changes in the table structures over time.
Still, I knew that we'd need the version number in there eventually.

#20Corey Huinker
corey.huinker@gmail.com
In reply to: Tomas Vondra (#16)
Re: Statistics Import and Export

Yeah, this is pretty much what I meant by "functional" interface. But if
I said maybe the format implemented by the patch is maybe too close to
how we store the statistics, then this has exactly the same issue. And
it has other issues too, I think - it breaks down the stats into
multiple function calls, so ensuring the sanity/correctness of whole
sets of statistics gets much harder, I think.

Export functions were my original plan, for simplicity, maintenance, etc.,
but it seemed like I'd be adding quite a few functions, so the one view
made more sense for an initial version. Also, I knew that pg_dump or some
other stats exporter would have to inline the guts of those functions into
queries for older versions, and adapting a view definition seemed more
straightforward for the reader than function definitions.

#133Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#132)
#134Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#133)
#135Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#134)
#136Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#135)
#137Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#136)
#138Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#136)
#139Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#138)
#140Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#139)
#141Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#140)
#142Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#141)
#143Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#142)
#144Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#143)
#145Michael Paquier
michael@paquier.xyz
In reply to: Tom Lane (#118)
#146Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#144)
#147Corey Huinker
corey.huinker@gmail.com
In reply to: Ashutosh Bapat (#113)
#148Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Corey Huinker (#147)
#149Tom Lane
tgl@sss.pgh.pa.us
In reply to: Ashutosh Bapat (#148)
#150Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Tom Lane (#149)
#151Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#146)
#152Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#151)
#153Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#152)
#154Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#153)
#155Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#154)
#156Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#155)
#157Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#156)
#158Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#157)
#159Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#158)
#160Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Tom Lane (#159)
#161Corey Huinker
corey.huinker@gmail.com
In reply to: Matthias van de Meent (#160)
#162Bruce Momjian
bruce@momjian.us
In reply to: Matthias van de Meent (#160)
#163Matthias van de Meent
boekewurm+postgres@gmail.com
In reply to: Bruce Momjian (#162)
#164Corey Huinker
corey.huinker@gmail.com
In reply to: Matthias van de Meent (#163)
#165Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#164)
#166Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#165)
#167Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#166)
#168Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#167)
#169Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#168)
#170Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#169)
#171Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#170)
#172Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#171)
#173Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#172)
#174Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#173)
#175Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#174)
#176Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#175)
#177Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#176)
#178Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#176)
#179Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#178)
#180Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#179)
#181Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#180)
#182Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#179)
#183Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#181)
#184Jeff Davis
pgsql@j-davis.com
In reply to: Jeff Davis (#183)
#185Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#184)
#186Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#185)
#187jian he
jian.universality@gmail.com
In reply to: Jeff Davis (#186)
#188Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#186)
#189Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#187)
#190jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#189)
#191Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#190)
#192jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#191)
#193jian he
jian.universality@gmail.com
In reply to: jian he (#192)
#194Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#191)
#195Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#194)
#196Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#195)
#197Jeff Davis
pgsql@j-davis.com
In reply to: jian he (#192)
#198Shinoda, Noriyoshi (PN Japan FSIP)
In reply to: Jeff Davis (#197)
#199Corey Huinker
corey.huinker@gmail.com
In reply to: Shinoda, Noriyoshi (PN Japan FSIP) (#198)
#200Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#199)
#201Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#200)
#202Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#201)
#203Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#202)
#204Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#203)
#205Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#204)
#206Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#205)
#207Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#206)
#208Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#207)
#209Shinoda, Noriyoshi (PN Japan FSIP)
In reply to: Corey Huinker (#208)
#210Corey Huinker
corey.huinker@gmail.com
In reply to: Shinoda, Noriyoshi (PN Japan FSIP) (#209)
#211Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#191)
#212Shinoda, Noriyoshi (PN Japan FSIP)
In reply to: Jeff Davis (#211)
#213Jeff Davis
pgsql@j-davis.com
In reply to: Shinoda, Noriyoshi (PN Japan FSIP) (#212)
#214Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#191)
#215Alexander Lakhin
exclusion@gmail.com
In reply to: Jeff Davis (#214)
#216Jeff Davis
pgsql@j-davis.com
In reply to: Alexander Lakhin (#215)
#217Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#214)
#218Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#217)
#219Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#218)
#220Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Jeff Davis (#211)
#221Corey Huinker
corey.huinker@gmail.com
In reply to: Heikki Linnakangas (#220)
#222Michael Paquier
michael@paquier.xyz
In reply to: Corey Huinker (#221)
#223Bruce Momjian
bruce@momjian.us
In reply to: Corey Huinker (#219)
#224Wetmore, Matthew (CTR)
Matthew.Wetmore@mdlive.com
In reply to: Bruce Momjian (#223)
#225Bruce Momjian
bruce@momjian.us
In reply to: Wetmore, Matthew (CTR) (#224)
#226Corey Huinker
corey.huinker@gmail.com
In reply to: Bruce Momjian (#225)
#227Corey Huinker
corey.huinker@gmail.com
In reply to: Bruce Momjian (#223)
#228Bruce Momjian
bruce@momjian.us
In reply to: Corey Huinker (#227)
#229Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#228)
#230Nathan Bossart
nathandbossart@gmail.com
In reply to: Bruce Momjian (#228)
#231Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#230)
#232Bruce Momjian
bruce@momjian.us
In reply to: Corey Huinker (#231)
#233Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#232)
#234Nathan Bossart
nathandbossart@gmail.com
In reply to: Bruce Momjian (#233)
#235Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#234)
#236Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#235)
#237Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#236)
#238Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#227)
#239Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#238)
#240Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#239)
#241Magnus Hagander
magnus@hagander.net
In reply to: Bruce Momjian (#229)
#242Bruce Momjian
bruce@momjian.us
In reply to: Magnus Hagander (#241)
#243Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#242)
#244Nathan Bossart
nathandbossart@gmail.com
In reply to: Alvaro Herrera (#243)
#245Bruce Momjian
bruce@momjian.us
In reply to: Nathan Bossart (#244)
#246Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#245)
#247Nathan Bossart
nathandbossart@gmail.com
In reply to: Alvaro Herrera (#246)
#248Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#246)
#249Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#246)
#250Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#248)
#251Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#248)
#252Magnus Hagander
magnus@hagander.net
In reply to: Tom Lane (#248)
#253Corey Huinker
corey.huinker@gmail.com
In reply to: Alvaro Herrera (#251)
#254Bruce Momjian
bruce@momjian.us
In reply to: Corey Huinker (#253)
#255Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#240)
#256Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#239)
#257Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#256)
#258Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#257)
#259Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#258)
#260Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#259)
#261Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#260)
#262Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#261)
#263Jeff Davis
pgsql@j-davis.com
In reply to: Jeff Davis (#261)
#264Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#263)
#265Bruce Momjian
bruce@momjian.us
In reply to: Jeff Davis (#261)
#266Bruce Momjian
bruce@momjian.us
In reply to: Corey Huinker (#259)
#267Jeff Davis
pgsql@j-davis.com
In reply to: Bruce Momjian (#266)
#268Bruce Momjian
bruce@momjian.us
In reply to: Jeff Davis (#267)
#269Nathan Bossart
nathandbossart@gmail.com
In reply to: Bruce Momjian (#268)
#270Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#269)
#271Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#270)
#272Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#271)
#273jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#272)
#274Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#273)
#275jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#274)
#276jian he
jian.universality@gmail.com
In reply to: jian he (#275)
#277Michael Paquier
michael@paquier.xyz
In reply to: jian he (#276)
#278jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#272)
#279Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#278)
#280Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#279)
#281Corey Huinker
corey.huinker@gmail.com
In reply to: Michael Paquier (#277)
#282Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#280)
#283Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#281)
#284Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#282)
#285Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#283)
#286Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#285)
#287Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#286)
#288Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#287)
#289Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#288)
#290Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#289)
#291jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#290)
#292Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#291)
#293Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#292)
#294jian he
jian.universality@gmail.com
In reply to: Corey Huinker (#293)
#295Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#294)
#296Michael Paquier
michael@paquier.xyz
In reply to: Corey Huinker (#295)
#297jian he
jian.universality@gmail.com
In reply to: Jeff Davis (#284)
#298Corey Huinker
corey.huinker@gmail.com
In reply to: jian he (#297)
#299Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#286)
#300Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#298)
#301Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#300)
#302Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#301)
#303Michael Paquier
michael@paquier.xyz
In reply to: Jeff Davis (#301)
#304Corey Huinker
corey.huinker@gmail.com
In reply to: Michael Paquier (#303)
#305Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#304)
#306Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#304)
#307Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#306)
#308Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#307)
#309Jeff Davis
pgsql@j-davis.com
In reply to: Jeff Davis (#308)
#310vignesh C
vignesh21@gmail.com
In reply to: Jeff Davis (#309)
#311Andrew Dunstan
andrew@dunslane.net
In reply to: Jeff Davis (#309)
#312Hayato Kuroda (Fujitsu)
kuroda.hayato@fujitsu.com
In reply to: Andrew Dunstan (#311)
#313Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#309)
#314Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#313)
#315Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#313)
#316Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#314)
#317Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#315)
#318Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#317)
#319Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#318)
#320Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#317)
#321Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#320)
#322Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#321)
#323Jeff Davis
pgsql@j-davis.com
In reply to: Hayato Kuroda (Fujitsu) (#312)
#324Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#322)
#325Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#322)
#326Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#325)
#327Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#326)
#328Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#326)
#329Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#328)
#330Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#329)
#331Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#330)
#332Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#330)
#333Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#330)
#334Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#332)
#335Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#332)
#336Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#335)
#337Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#336)
#338Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#336)
#339Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#337)
#340Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#333)
#341Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#340)
#342Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#341)
#343Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#335)
#344Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#342)
#345Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#339)
#346Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#343)
#347Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#343)
#348Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#347)
#349Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#345)
#350Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#348)
#351Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#348)
#352Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#344)
#353Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#352)
#354Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#349)
#355Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#354)
#356Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#355)
#357Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#356)
#358Daniel Gustafsson
daniel@yesql.se
In reply to: Andres Freund (#356)
#359Nathan Bossart
nathandbossart@gmail.com
In reply to: Daniel Gustafsson (#358)
#360Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#355)
#361Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#360)
#362jian he
jian.universality@gmail.com
In reply to: Jeff Davis (#350)
#363Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: jian he (#362)
#364Jeff Davis
pgsql@j-davis.com
In reply to: jian he (#362)
#365Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#332)
#366Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#365)
#367Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#366)
#368Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#366)
#369Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#330)
#370Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#369)
#371Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#370)
#372Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#371)
#373Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#371)
#374Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#373)
#375Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#374)
#376Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#375)
#377Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#376)
#378Tom Lane
tgl@sss.pgh.pa.us
In reply to: Corey Huinker (#377)
#379Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#378)
#380Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#379)
#381Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#380)
#382Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#377)
#383Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#382)
#384Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#383)
#385Melanie Plageman
melanieplageman@gmail.com
In reply to: Corey Huinker (#367)
#386Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#343)
#387Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#386)
#388Robert Haas
robertmhaas@gmail.com
In reply to: Jeff Davis (#387)
#389Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#388)
#390Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#389)
#391Tom Lane
tgl@sss.pgh.pa.us
In reply to: Melanie Plageman (#385)
#392Corey Huinker
corey.huinker@gmail.com
In reply to: Melanie Plageman (#385)
#393Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#391)
#394Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#393)
#395Melanie Plageman
melanieplageman@gmail.com
In reply to: Corey Huinker (#394)
#396Jeff Davis
pgsql@j-davis.com
In reply to: Ashutosh Bapat (#363)
#397Corey Huinker
corey.huinker@gmail.com
In reply to: Melanie Plageman (#395)
#398Greg Sabino Mullane
greg@turnstep.com
In reply to: Jeff Davis (#396)
#399Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#397)
#400Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Jeff Davis (#396)
#401Jeff Davis
pgsql@j-davis.com
In reply to: Ashutosh Bapat (#400)
#402Jeff Davis
pgsql@j-davis.com
In reply to: Greg Sabino Mullane (#398)
#403Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#402)
#404Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#403)
#405Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#404)
#406Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#405)
#407Alexander Lakhin
exclusion@gmail.com
In reply to: Jeff Davis (#369)
#408Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Lakhin (#407)
#409Alexander Lakhin
exclusion@gmail.com
In reply to: Tom Lane (#408)
#410Greg Sabino Mullane
greg@turnstep.com
In reply to: Jeff Davis (#402)
#411Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alexander Lakhin (#409)
#412Jeff Davis
pgsql@j-davis.com
In reply to: Greg Sabino Mullane (#410)
#413Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#412)
#414Corey Huinker
corey.huinker@gmail.com
In reply to: Tom Lane (#411)
#415Magnus Hagander
magnus@hagander.net
In reply to: Jeff Davis (#412)
#416Corey Huinker
corey.huinker@gmail.com
In reply to: Magnus Hagander (#415)
#417Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Jeff Davis (#401)
#418Greg Sabino Mullane
greg@turnstep.com
In reply to: Tom Lane (#413)
#419Jeff Davis
pgsql@j-davis.com
In reply to: Ashutosh Bapat (#417)
#420Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Jeff Davis (#419)
#421Jeff Davis
pgsql@j-davis.com
In reply to: Ashutosh Bapat (#420)
#422Corey Huinker
corey.huinker@gmail.com
In reply to: Greg Sabino Mullane (#418)
#423Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Jeff Davis (#421)
#424Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#370)
#425Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#424)
#426Nathan Bossart
nathandbossart@gmail.com
In reply to: Andres Freund (#424)
#427Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#426)
#428Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#427)
#429Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#427)
#430Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#428)
#431Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#430)
#432Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#431)
#433Jeff Davis
pgsql@j-davis.com
In reply to: Ashutosh Bapat (#423)
#434Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#432)
#435Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#432)
#436Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#435)
#437Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#432)
#438Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#437)
#439Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#437)
#440Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#439)
#441Corey Huinker
corey.huinker@gmail.com
In reply to: Robert Haas (#438)
#442Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#441)
#443Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#440)
#444Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#441)
#445Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#440)
#446Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#443)
#447Nathan Bossart
nathandbossart@gmail.com
In reply to: Andres Freund (#444)
#448Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#445)
#449Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#445)
#450Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#444)
#451Corey Huinker
corey.huinker@gmail.com
In reply to: Andres Freund (#444)
#452Andres Freund
andres@anarazel.de
In reply to: Corey Huinker (#451)
#453Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#452)
#454Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#450)
#455Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#450)
#456Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#454)
#457Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#455)
#458Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#456)
#459Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#457)
#460Nathan Bossart
nathandbossart@gmail.com
In reply to: Andres Freund (#459)
#461Robert Haas
robertmhaas@gmail.com
In reply to: Nathan Bossart (#460)
#462Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#436)
#463Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#438)
#464Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#462)
#465Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#463)
#466Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#463)
#467Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#466)
#468Robert Treat
xzilla@users.sourceforge.net
In reply to: Jeff Davis (#463)
#469Jeff Davis
pgsql@j-davis.com
In reply to: Robert Treat (#468)
#470Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#433)
#471Robert Treat
xzilla@users.sourceforge.net
In reply to: Jeff Davis (#469)
#472Jeff Davis
pgsql@j-davis.com
In reply to: Robert Treat (#471)
#473Corey Huinker
corey.huinker@gmail.com
In reply to: Robert Treat (#471)
#474Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#472)
#475Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Corey Huinker (#474)
#476Corey Huinker
corey.huinker@gmail.com
In reply to: Hari Krishna Sunder (#475)
#477Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#476)
#478Robert Treat
xzilla@users.sourceforge.net
In reply to: Corey Huinker (#473)
#479Corey Huinker
corey.huinker@gmail.com
In reply to: Robert Treat (#478)
#480Jeff Davis
pgsql@j-davis.com
In reply to: Robert Treat (#478)
#481Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#470)
#482Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#481)
#483Ashutosh Bapat
ashutosh.bapat@enterprisedb.com
In reply to: Jeff Davis (#482)
#484Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#480)
#485Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#484)
#486Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#484)
#487Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#486)
#488Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#487)
#489Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#488)
#490Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#489)
#491Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#485)
#492Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#491)
#493Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#479)
#494Jeff Davis
pgsql@j-davis.com
In reply to: Jeff Davis (#491)
#495Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#494)
#496Robert Treat
xzilla@users.sourceforge.net
In reply to: Jeff Davis (#493)
#497Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#495)
#498Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#497)
#499Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#498)
#500Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#499)
#501Jeff Davis
pgsql@j-davis.com
In reply to: Corey Huinker (#500)
#502Corey Huinker
corey.huinker@gmail.com
In reply to: Jeff Davis (#501)
#503Corey Huinker
corey.huinker@gmail.com
In reply to: Corey Huinker (#502)
#504Robert Haas
robertmhaas@gmail.com
In reply to: Greg Sabino Mullane (#398)
#505Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#504)
#506Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#503)
#507Robert Treat
xzilla@users.sourceforge.net
In reply to: Nathan Bossart (#506)
#508Robert Haas
robertmhaas@gmail.com
In reply to: Jeff Davis (#505)
#509Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#506)
#510Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#509)
#511Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#508)
#512Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#510)
#513Robert Haas
robertmhaas@gmail.com
In reply to: Jeff Davis (#511)
#514Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#512)
#515Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#514)
#516Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#515)
#517Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#514)
#518Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#515)
#519Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#514)
#520Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#519)
#521Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#518)
#522Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#521)
#523Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#522)
#524Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#523)
#525Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#524)
#526Nathan Bossart
nathandbossart@gmail.com
In reply to: Nathan Bossart (#525)
#527Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#526)
#528Nathan Bossart
nathandbossart@gmail.com
In reply to: Corey Huinker (#527)
#529Greg Sabino Mullane
greg@turnstep.com
In reply to: Robert Haas (#513)
#530Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#528)
#531Nathan Bossart
nathandbossart@gmail.com
In reply to: Hari Krishna Sunder (#530)
#532Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#531)
#533Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Hari Krishna Sunder (#532)
#534Michael Paquier
michael@paquier.xyz
In reply to: Hari Krishna Sunder (#533)
#535Nathan Bossart
nathandbossart@gmail.com
In reply to: Hari Krishna Sunder (#532)
#536Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#535)
#537Nathan Bossart
nathandbossart@gmail.com
In reply to: Hari Krishna Sunder (#536)
#538Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#537)
#539Nathan Bossart
nathandbossart@gmail.com
In reply to: Hari Krishna Sunder (#538)
#540Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#539)
#541Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#539)
#542Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#541)
#543Corey Huinker
corey.huinker@gmail.com
In reply to: Nathan Bossart (#542)
#544Jeff Davis
pgsql@j-davis.com
In reply to: Nathan Bossart (#542)
#545Robert Haas
robertmhaas@gmail.com
In reply to: Greg Sabino Mullane (#529)
#546Nathan Bossart
nathandbossart@gmail.com
In reply to: Robert Haas (#545)
#547Tom Lane
tgl@sss.pgh.pa.us
In reply to: Nathan Bossart (#546)
#548Nathan Bossart
nathandbossart@gmail.com
In reply to: Jeff Davis (#544)
#549Jeff Davis
pgsql@j-davis.com
In reply to: Robert Haas (#545)
#550Hari Krishna Sunder
hari.db.pg@gmail.com
In reply to: Nathan Bossart (#548)
#551Greg Sabino Mullane
greg@turnstep.com
In reply to: Jeff Davis (#549)
#552Nathan Bossart
nathandbossart@gmail.com
In reply to: Greg Sabino Mullane (#551)
#553Robert Haas
robertmhaas@gmail.com
In reply to: Nathan Bossart (#552)
#554Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Sabino Mullane (#551)
#555Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#554)