pg_upgrade and statistics

Started by Daniel Farina, about 14 years ago, 57 messages, pgsql-hackers
#1 Daniel Farina
daniel@heroku.com

As noted in the manual, pg_statistic is not ported in any way when
performing pg_upgrade. I have been investigating what it would take
to (even via just a connected SQL superuser client running UPDATE or
INSERT against pg_statistic) get at least some baseline statistics
into the database as quickly as possible, since in practice the
underlying implementation of the statistics and cost estimation does
not change so dramatically between releases as to make the old
statistics useless (AFAIK). I eventually used a few contortions to be
able to update the anyarray elements in pg_statistic:

UPDATE pg_statistic
   SET stavalues1 = array_in(anyarray_out('{thearrayliteral}'::concrete_type[]),
                             'concrete_type'::regtype,
                             atttypemod)
 WHERE staattnum = attnum AND starelid = therelation;

Notably, the type analysis phase is a bit too smart for me to simply
cast to "anyarray" from a concrete type, so I run it through a
deparse/reparse phase instead to fool it.
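
For illustration, a minimal sketch of the direct route the type checker
rejects next to the I/O-function round trip described above (the table
name and values here are hypothetical):

```sql
-- Rejected by the type-analysis phase: a concrete array type will not
-- cast directly to the pseudo-type anyarray.
UPDATE pg_statistic
   SET stavalues1 = '{1,2,3}'::integer[]
 WHERE starelid = 'some_table'::regclass AND staattnum = 1;

-- Accepted: deparse to text with anyarray_out, then reparse with array_in,
-- whose anyarray result satisfies the column type.
UPDATE pg_statistic
   SET stavalues1 = array_in(anyarray_out('{1,2,3}'::integer[]),
                             'integer'::regtype, -1)
 WHERE starelid = 'some_table'::regclass AND staattnum = 1;
```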

Now I'm stuck trying to ensure that autoanalyze will run at least once
after we have committed the old statistics to the new catalogs,
regardless of how much activity has taken place on the table,
regardless of how cold (thus, tuning the GUC thresholds is not
attractive, because at what point should I tune them back to normal
settings?). One idea I had was to jigger pg_stat to indicate that a
lot of tuples have changed since the last analyze (which will be
automatically fixed after autoanalyze on a relation completes) but
because this is not a regular table it doesn't look too easy unless I
break out a new C extension.

You probably are going to ask: "why not just run ANALYZE and be done
with it?" The reasons are:

* ANALYZE can take a sufficiently long time on large databases that
the downtime of switching versions is not attractive

* If we don't run ANALYZE and have no old statistics, then the plans
can be disastrously bad for the user

* If we do run the ANALYZE statement on a user's behalf as part of
the upgrade, any compatibility fixups that require an exclusive lock
(such as some ALTER TABLE statements) would have to block on this
relatively long ANALYZE. autoanalyze/autovacuum, by comparison, backs
off frequently, so disaster is averted.

If anyone has any insightful comments as to how to meet these
requirements, I'd appreciate them, otherwise I can consider it an
interesting area for improvement and will eat the ANALYZE and salt the
documentation with caveats.

--
fdr

#2 Bruce Momjian
bruce@momjian.us
In reply to: Daniel Farina (#1)
Re: pg_upgrade and statistics

On Mon, Mar 12, 2012 at 06:38:30PM -0700, Daniel Farina wrote:

You probably are going to ask: "why not just run ANALYZE and be done
with it?" The reasons are:

* ANALYZE can take a sufficiently long time on large databases that
the downtime of switching versions is not attractive

* If we don't run ANALYZE and have no old statistics, then the plans
can be disastrously bad for the user

* If we do run the ANALYZE statement on a user's behalf as part of
the upgrade, any compatibility fixups that require an exclusive lock
(such as some ALTER TABLE statements) would have to block on this
relatively long ANALYZE. autoanalyze/autovacuum, by comparison, backs
off frequently, so disaster is averted.

If anyone has any insightful comments as to how to meet these
requirements, I'd appreciate them, otherwise I can consider it an
interesting area for improvement and will eat the ANALYZE and salt the
documentation with caveats.

Copying the statistics from the old server is on the pg_upgrade TODO
list. I have avoided it because it will add an additional requirement
that will make pg_upgrade more fragile in case of major version changes.

Does anyone have a sense of how often we change the statistics data
between major versions? Ideally, pg_dump/pg_dumpall would add the
ability to dump statistics, and pg_upgrade could use that.

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables. What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.
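
For reference, the last-analyzed and last-vacuumed fields in question are
exposed read-only through the statistics views, so they are easy to inspect
(though not to set) from SQL:

```sql
-- Per-table vacuum/analyze timestamps as tracked by the stats collector.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY relname;
```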

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#2)
Re: pg_upgrade and statistics

Bruce Momjian <bruce@momjian.us> writes:

Copying the statistics from the old server is on the pg_upgrade TODO
list. I have avoided it because it will add an additional requirement
that will make pg_upgrade more fragile in case of major version changes.

Does anyone have a sense of how often we change the statistics data
between major versions?

I don't think pg_statistic is inherently any more stable than any other
system catalog. We've whacked it around significantly just last week,
which might color my perception a bit, but there are other changes on
the to-do list. (For one example, see nearby complaints about
estimating TOAST-related costs, which we could not fix without adding
more stats data.)

regards, tom lane

#4 Daniel Farina
daniel@heroku.com
In reply to: Bruce Momjian (#2)
Re: pg_upgrade and statistics

On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian <bruce@momjian.us> wrote:

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables.  What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.

Thank you very much for this reference. I will look into it.

--
fdr

#5 Daniel Farina
daniel@heroku.com
In reply to: Tom Lane (#3)
Re: pg_upgrade and statistics

On Mon, Mar 12, 2012 at 9:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Bruce Momjian <bruce@momjian.us> writes:

Copying the statistics from the old server is on the pg_upgrade TODO
list.  I have avoided it because it will add an additional requirement
that will make pg_upgrade more fragile in case of major version changes.

Does anyone have a sense of how often we change the statistics data
between major versions?

I don't think pg_statistic is inherently any more stable than any other
system catalog.

Agreed, but it would appear that in practice a fair amount of it
carries forward. If someone ripped up the statistics system and did
them all over in such a way that the old fields had no meaning on
future costing metrics, that'd probably be reasonable cause for a
caveat involving full-blown reanalyze...still, that doesn't seem to
happen every year.

We've whacked it around significantly just last week,
which might color my perception a bit, but there are other changes on
the to-do list.  (For one example, see nearby complaints about
estimating TOAST-related costs, which we could not fix without adding
more stats data.)

Is accruing additional statistics likely going to be a big problem? I
noticed the addition of the new anyarray (presumably for
array-selectivity) features; would planning with an "empty" assumption
be disastrous vs. the old behavior, which had no concept of those at
all?

I don't think it's necessary to make statistics porting a feature of
pg_upgrade in all circumstances, but it would be "nice" when possible.
That having been said, perhaps there are other ways for pg_upgrade to
be better invested in... or, best of all and somewhat unrelatedly,
full-blown logical replication.

Although this conversation has taken focus on "how do we move stats
forward", I am about as interested in "how do I run statements (like
ANALYZE) more 'nicely'". The same general problem pervades many
background task issues, including autovacuum and large physical
reorganizations of data.

--
fdr

#6 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#3)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 12:12:27AM -0400, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Copying the statistics from the old server is on the pg_upgrade TODO
list. I have avoided it because it will add an additional requirement
that will make pg_upgrade more fragile in case of major version changes.

Does anyone have a sense of how often we change the statistics data
between major versions?

I don't think pg_statistic is inherently any more stable than any other
system catalog. We've whacked it around significantly just last week,
which might color my perception a bit, but there are other changes on
the to-do list. (For one example, see nearby complaints about
estimating TOAST-related costs, which we could not fix without adding
more stats data.)

Yes, that was my reaction too. pg_upgrade has worked hard to avoid
copying any system tables, relying on pg_dump to handle that.

I just received a sobering blog comment stating that pg_upgrade took 5
minutes on a 0.5TB database, but analyze took over an hour:

http://momjian.us/main/blogs/pgblog/2012.html#March_12_2012

Is there some type of intermediate format we could use to dump/restore
the statistics? Is there an analyze "light" mode we could support that
would run faster?
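
One blunt approximation of an analyze "light" mode that already exists,
assuming coarser statistics are tolerable as a stopgap: shrink the sample
by lowering the statistics target before running ANALYZE.

```sql
-- ANALYZE samples roughly 300 * target rows per table, so a low target
-- reads far fewer pages; rerun at the normal target (or let autoanalyze
-- do it) once the system is back in service.
SET default_statistics_target = 10;   -- server default is 100
ANALYZE;
RESET default_statistics_target;
```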

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#7 Bruce Momjian
bruce@momjian.us
In reply to: Daniel Farina (#4)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:

On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian <bruce@momjian.us> wrote:

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables. What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.

Thank you very much for this reference. I will look into it.

I assume a missing last_analyze would trigger an auto-analyze, but I am
unclear if we assume a last_vacuum included an analyze; I think you
need to look at autovacuum.c for the details; let me know if you need
help.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#8 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#6)
Re: pg_upgrade and statistics

Bruce Momjian <bruce@momjian.us> wrote:

I just received a sobering blog comment stating that pg_upgrade
took 5 minutes on a 0.5TB database, but analyze took over an hour:

Yeah, we have had similar experiences. Even if this can't be done
for every release or for every data type, bringing over statistics
from the old release as a starting point would really help minimize
downtime on large databases.

Of course, release docs should indicate which statistics *won't* be
coming across, and should probably recommend a database ANALYZE or
VACUUM ANALYZE be done when possible.

-Kevin

#9 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#7)
Re: pg_upgrade and statistics

Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:

On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:

On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian <bruce@momjian.us> wrote:

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables.  What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.

Thank you very much for this reference. I will look into it.

I assume a missing last_analyze would trigger an auto-analyze,

You're wrong. Autovacuum does not consider time, only dead/live tuple
counts. The formulas it uses are in the autovacuum docs; some details
(such as the fact that it skips tables that do not have stat entries)
might be missing.
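
For reference, the per-table condition from the autovacuum documentation,
with the default GUC values written out (the tuple counts come from the
stats collector, which is why tables without stat entries are skipped):

```sql
-- Autoanalyze is triggered once
--   tuples_changed_since_last_analyze >
--     autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples
-- With the defaults (50 and 0.1), a 100,000-row table needs 10,051 changes:
SELECT 10051 > 50 + 0.1 * 100000 AS analyze_due;  -- true
```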

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#10 Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#9)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 11:34:16AM -0300, Alvaro Herrera wrote:

Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:

On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:

On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian <bruce@momjian.us> wrote:

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables. What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.

Thank you very much for this reference. I will look into it.

I assume a missing last_analyze would trigger an auto-analyze,

You're wrong. Autovacuum does not consider time, only dead/live tuple
counts. The formulas it uses are in the autovacuum docs; some details
(such as the fact that it skips tables that do not have stat entries)
might be missing.

Oh, yes. Thank you for the correction; not sure what I was thinking.

How would they trigger an autovacuum then?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#11 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#10)
Re: pg_upgrade and statistics

Excerpts from Bruce Momjian's message of Tue Mar 13 11:49:26 -0300 2012:

On Tue, Mar 13, 2012 at 11:34:16AM -0300, Alvaro Herrera wrote:

Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:

On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:

On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian <bruce@momjian.us> wrote:

To answer your specific question, I think clearing the last analyzed
fields should cause autovacuum to run ANALYZE on those tables.  What I
don't know is whether not clearing the last vacuum datetime will cause
the table not to be analyzed.

Thank you very much for this reference. I will look into it.

I assume a missing last_analyze would trigger an auto-analyze,

You're wrong. Autovacuum does not consider time, only dead/live tuple
counts. The formulas it uses are in the autovacuum docs; some details
(such as the fact that it skips tables that do not have stat entries)
might be missing.

Oh, yes. Thank you for the correction; not sure what I was thinking.

How would they trigger an autovacuum then?

We don't have any mechanism to trigger it currently. Maybe we could
inject fake messages to the stats collector so that it'd believe the
tables have lots of new tuples and an analyze is necessary.

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#12 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#8)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 09:28:58AM -0500, Kevin Grittner wrote:

Bruce Momjian <bruce@momjian.us> wrote:

I just received a sobering blog comment stating that pg_upgrade
took 5 minutes on a 0.5TB database, but analyze took over an hour:

Yeah, we have had similar experiences. Even if this can't be done
for every release or for every data type, bringing over statistics
from the old release as a starting point would really help minimize
downtime on large databases.

Of course, release docs should indicate which statistics *won't* be
coming across, and should probably recommend a database ANALYZE or
VACUUM ANALYZE be done when possible.

Having a "works sometimes" behavior is really not good; some users
aren't going to notice until it is too late that they need to run
analyze. It is fine for hard-core folks like Kevin, but not for the
average user.

At best, pg_upgrade needs to copy over the statistics it can, and adjust
the system statistics to cause autoanalyze to run on those that can't be
migrated. Frankly, as Tom stated, we have been adjusting the system
statistics collection so often that I have avoided hard-coding that
information into pg_upgrade --- it could potentially make pg_upgrade
less reliable, whereas vacuumdb --all --analyze always works.

We might decide we want a consistently slow process rather than one that
is sometimes fast and sometimes slow.

As you can see, I am at a loss in how to improve this.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#13 Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#11)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 12:08:41PM -0300, Alvaro Herrera wrote:

You're wrong. Autovacuum does not consider time, only dead/live tuple
counts. The formulas it uses are in the autovacuum docs; some details
(such as the fact that it skips tables that do not have stat entries)
might be missing.

Oh, yes. Thank you for the correction; not sure what I was thinking.

How would they trigger an autovacuum then?

We don't have any mechanism to trigger it currently. Maybe we could
inject fake messages to the stats collector so that it'd believe the
tables have lots of new tuples and an analyze is necessary.

Ew! Yes, I thought some more and realized these are system _views_,
meaning we can't just update them with UPDATE. It sounds like something
pg_upgrade will have to do with a server-side function, someday.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#14 Greg Stark
stark@mit.edu
In reply to: Daniel Farina (#1)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 1:38 AM, Daniel Farina <daniel@heroku.com> wrote:

You probably are going to ask: "why not just run ANALYZE and be done
with it?"

Uhm yes. If analyze takes a long time then something is broken. It's
only reading a sample which should be pretty much a fixed number of
pages per table. It shouldn't take much longer on your large database
than on your smaller databases.

Perhaps you're running vacuum analyze by mistake?

If Analyze is taking a long time then we're getting the worst of both
worlds. The statistics are very poor for certain metrics (namely
ndistinct). The main reason we don't do better is because we don't
want to do a full scan.

--
greg

#15 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Greg Stark (#14)
Re: pg_upgrade and statistics

Greg Stark <stark@mit.edu> wrote:

Daniel Farina <daniel@heroku.com> wrote:

You probably are going to ask: "why not just run ANALYZE and be
done with it?"

Uhm yes. If analyze takes a long time then something is broken.
It's only reading a sample which should be pretty much a fixed
number of pages per table. It shouldn't take much longer on your
large database than on your smaller databases.

On a small database:

cc=# analyze "CaseHist";
ANALYZE
Time: 255.107 ms
cc=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages | reltuples
----------+-----------
     1264 |     94426
(1 row)

Same table on a much larger database (and much more powerful
hardware):

cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
cir=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages |  reltuples
----------+-------------
  3588659 | 2.12391e+08
(1 row)

Either way, there are about 500 tables in the database.

-Kevin

#16 Bruce Momjian
bruce@momjian.us
In reply to: Greg Stark (#14)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 05:46:06PM +0000, Greg Stark wrote:

On Tue, Mar 13, 2012 at 1:38 AM, Daniel Farina <daniel@heroku.com> wrote:

You probably are going to ask: "why not just run ANALYZE and be done
with it?"

Uhm yes. If analyze takes a long time then something is broken. It's
only reading a sample which should be pretty much a fixed number of
pages per table. It shouldn't take much longer on your large database
than on your smaller databases.

Perhaps you're running vacuum analyze by mistake?

pg_upgrade recommends running this command:

vacuumdb --all --analyze-only

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#17 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#15)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:

Greg Stark <stark@mit.edu> wrote:

Daniel Farina <daniel@heroku.com> wrote:

You probably are going to ask: "why not just run ANALYZE and be
done with it?"

Uhm yes. If analyze takes a long time then something is broken.
It's only reading a sample which should be pretty much a fixed
number of pages per table. It shouldn't take much longer on your
large database than on your smaller databases.

On a small database:

cc=# analyze "CaseHist";
ANALYZE
Time: 255.107 ms
cc=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages | reltuples
----------+-----------
     1264 |     94426
(1 row)

Same table on a much larger database (and much more powerful
hardware):

cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
cir=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages |  reltuples
----------+-------------
  3588659 | 2.12391e+08
(1 row)

Either way, there are about 500 tables in the database.

That is 2.5 minutes. How large is that database?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#18 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#17)
Re: pg_upgrade and statistics

Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:

cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
cir=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages |  reltuples
----------+-------------
  3588659 | 2.12391e+08
(1 row)

Either way, there are about 500 tables in the database.

That is 2.5 minutes. How large is that database?

cir=# select pg_size_pretty(pg_database_size('cir'));
 pg_size_pretty
----------------
 2563 GB
(1 row)

In case you meant "How large is that table that took 2.5 minutes to
analyze?":

cir=# select pg_size_pretty(pg_total_relation_size('"CaseHist"'));
 pg_size_pretty
----------------
 44 GB
(1 row)

I've started a database analyze, to see how long that takes. Even
if each table took 1/4 second (like on the small database) with over
500 user tables, plus the system tables, it'd be 15 minutes. I'm
guessing it'll run over an hour, but I haven't timed it lately, so
-- we'll see.

-Kevin

#19 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Kevin Grittner (#18)
Re: pg_upgrade and statistics

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> wrote:

Bruce Momjian <bruce@momjian.us> wrote:

That is 2.5 minutes. How large is that database?

I dug around a little and found that we had turned on vacuum cost
limits on the central databases, because otherwise the web team
complained about performance during maintenance windows. On the
county database we generally don't have users working all night, so
we do maintenance during off hours, and run without cost-based
limits.

When the full run completes, I'll try analyze on that table again,
in a session with the limits off.

Maybe vacuumdb should have an option to disable the limits, and we
recommend that after pg_upgrade?
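
Short of a new vacuumdb option, the limits can likely be disabled per run
today, since vacuumdb's connections honor the libpq PGOPTIONS environment
variable; a sketch:

```shell
# vacuum_cost_delay = 0 turns off cost-based sleeping for this run only.
PGOPTIONS='-c vacuum_cost_delay=0' vacuumdb --all --analyze-only
```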

-Kevin

#20 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#18)
Re: pg_upgrade and statistics

On Tue, Mar 13, 2012 at 02:07:14PM -0500, Kevin Grittner wrote:

Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:

cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
cir=# select relpages, reltuples from pg_class where relname = 'CaseHist';
 relpages |  reltuples
----------+-------------
  3588659 | 2.12391e+08
(1 row)

Either way, there are about 500 tables in the database.

That is 2.5 minutes. How large is that database?

cir=# select pg_size_pretty(pg_database_size('cir'));
 pg_size_pretty
----------------
 2563 GB
(1 row)

In case you meant "How large is that table that took 2.5 minutes to
analyze?":

cir=# select pg_size_pretty(pg_total_relation_size('"CaseHist"'));
 pg_size_pretty
----------------
 44 GB
(1 row)

I've started a database analyze, to see how long that takes. Even
if each table took 1/4 second (like on the small database) with over
500 user tables, plus the system tables, it'd be 15 minutes. I'm
guessing it'll run over an hour, but I haven't timed it lately, so
-- we'll see.

OK, so a single 44GB table took 2.5 minutes to analyze; that is not
good. It would require 11 such tables to reach 500GB (0.5 TB), and
would take 27 minutes. The report I had was twice as long, but still in
the ballpark of "too long". :-(

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Stark (#14)
#22 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#20)
#23 Peter Eisentraut
peter_e@gmx.net
In reply to: Tom Lane (#21)
#24 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#20)
#25 Bruce Momjian
bruce@momjian.us
In reply to: Peter Eisentraut (#23)
#26 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#25)
#27 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#20)
#28 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#27)
#29 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#26)
#30 Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#29)
#31 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#29)
#32 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#30)
#33 Daniel Farina
daniel@heroku.com
In reply to: Robert Haas (#30)
#34 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#30)
#35 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#31)
#36 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#35)
#37 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#36)
#38 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#35)
In reply to: Bruce Momjian (#37)
#40 Peter Eisentraut
peter_e@gmx.net
In reply to: Bruce Momjian (#37)
#41 Bruce Momjian
bruce@momjian.us
In reply to: Peter Eisentraut (#40)
#42 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#38)
#43 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#42)
#44 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#43)
#45 Peter Eisentraut
peter_e@gmx.net
In reply to: Bruce Momjian (#41)
#46 Bruce Momjian
bruce@momjian.us
In reply to: Peter Eisentraut (#45)
#47 Andrew Dunstan
andrew@dunslane.net
In reply to: Bruce Momjian (#46)
#48 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Bruce Momjian (#46)
#49 Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#47)
#50 Bruce Momjian
bruce@momjian.us
In reply to: Kevin Grittner (#48)
#51 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andrew Dunstan (#47)
#52 Bruce Momjian
bruce@momjian.us
In reply to: Andrew Dunstan (#47)
#53 Peter Eisentraut
peter_e@gmx.net
In reply to: Andrew Dunstan (#47)
#54 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Eisentraut (#53)
#55 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#52)
#56 Ants Aasma
ants.aasma@cybertec.at
In reply to: Alvaro Herrera (#55)
#57 Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#49)