plan time of MASSIVE partitioning ...

Started by Hans-Jürgen Schönig · over 15 years ago · 64 messages · pgsql-hackers
#1 Hans-Jürgen Schönig
postgres@cybertec.at

hello everybody,

we came across an issue which turned out to be more serious than previously expected.
imagine a system with, say, 1000 partitions (heavily indexed) or so. the time taken by the planner is already fairly heavy in this case.

i tried this one with 5000 unindexed tables (just one col):

test=# \timing
Timing is on.
test=# prepare x(int4) AS select * from t_data order by id desc;
PREPARE
Time: 361.552 ms

you will see similar or higher runtimes in case of 500 partitions and a handful of indexes.

does anybody see a potential way to do a shortcut through the planner?
a prepared query is no solution here as constraint exclusion would be dead in this case (making the entire thing an even bigger drama).

did anybody think of a solution to this problem?
or more precisely: can there be a solution to this problem?

many thanks,

hans

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de

#2 Stephen Frost
sfrost@snowman.net
In reply to: Hans-Jürgen Schönig (#1)
Re: plan time of MASSIVE partitioning ...

* PostgreSQL - Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

did anybody think of a solution to this problem?
or more precisely: can there be a solution to this problem?

Please post to the correct list (-performance) and provide information
like PG version, postgresql.conf, the actual table definition, the
resulting query plan, etc, etc...

Thanks,

Stephen

#3 Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Stephen Frost (#2)
Re: plan time of MASSIVE partitioning ...

On Sep 3, 2010, at 2:04 PM, Stephen Frost wrote:

* PostgreSQL - Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

did anybody think of a solution to this problem?
or more precisely: can there be a solution to this problem?

Please post to the correct list (-performance) and provide information
like PG version, postgresql.conf, the actual table definition, the
resulting query plan, etc, etc...

Thanks,

Stephen

hello stephen,

this seems like more a developer question to me than a -performance one.
it is not related to the table structure at all - it is basically an issue with incredibly large inheritance lists.
it applies to postgres 9 and most likely to everything before.
postgresql.conf is not relevant at all at this point.

the plan is pretty fine.
the question is rather: does anybody see a chance to handle such lists more efficiently inside postgres?
also, it is not the point whether my data structure is sane or not. it is really more generic - namely a shortcut for this case inside the planning process.

many thanks,

hans

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de

#4 Stephen Frost
sfrost@snowman.net
In reply to: Hans-Jürgen Schönig (#3)
Re: plan time of MASSIVE partitioning ...

* PostgreSQL - Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

this seems like more a developer question to me than a -performance one.
it is not related to the table structure at all - it is basically an issue with incredibly large inheritance lists.
it applies to postgres 9 and most likely to everything before.
postgresql.conf is not relevant at all at this point.

Really? What's constraint_exclusion set to? Is GEQO getting used?
What are the GEQO parameters set to? Do you have any CHECK constraints
on the tables?

If you want someone else to build a test case and start looking into it,
it's best to not make them have to guess at what you've done.

the plan is pretty fine.
the question is rather: does anybody see a chance to handle such lists more efficiently inside postgres?
also, it is not the point whether my data structure is sane or not. it is really more generic - namely a shortcut for this case inside the planning process.

Coming up with cases where PG doesn't perform well in a completely
contrived unrealistic environment isn't likely to impress anyone to
do anything.

A small (but complete..) test case which mimics a real world environment
and real world problem would go a lot farther. I can certainly believe
that people out there have partitions set up with lots of tables and
that it will likely grow- but they probably also have CHECK constraints,
have tweaked what constraint_exclusion is set to, have adjusted their
work_mem and related settings, maybe tweaked some planner GUCs, etc,
etc.

This is an area I'm actually interested in and curious about, I'd rather
work together on it than be told that the questions I'm asking aren't
relevant.

Thanks,

Stephen

#5 Robert Haas
robertmhaas@gmail.com
In reply to: Hans-Jürgen Schönig (#1)
Re: plan time of MASSIVE partitioning ...

2010/9/3 PostgreSQL - Hans-Jürgen Schönig <postgres@cybertec.at>:

i tried this one with 5000 unindexed tables (just one col):

test=# \timing
Timing is on.
test=# prepare x(int4) AS select * from t_data order by id desc;
PREPARE
Time: 361.552 ms

you will see similar or higher runtimes in case of 500 partitions and a handful of indexes.

I'd like to see (1) a script to reproduce your test environment (as
Stephen also requested) and (2) gprof or oprofile results.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#6 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hans-Jürgen Schönig (#1)
Re: plan time of MASSIVE partitioning ...

PostgreSQL - Hans-Jürgen Schönig <postgres@cybertec.at> writes:

imagine a system with, say, 1000 partitions (heavily indexed) or so. the time taken by the planner is already fairly heavy in this case.

As the fine manual points out, the current scheme for managing
partitioned tables isn't intended to scale past a few dozen partitions.

I think we'll be able to do better when we have an explicit
representation of partitioning, since then the planner won't
have to expend large amounts of effort reverse-engineering knowledge
about how an inheritance tree is partitioned. Before that happens,
it's not really worth the trouble to worry about such cases.

regards, tom lane

#7 Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Tom Lane (#6)
Re: plan time of MASSIVE partitioning ...

On Sep 3, 2010, at 4:40 PM, Tom Lane wrote:

PostgreSQL - Hans-Jürgen Schönig <postgres@cybertec.at> writes:

imagine a system with, say, 1000 partitions (heavily indexed) or so. the time taken by the planner is already fairly heavy in this case.

As the fine manual points out, the current scheme for managing
partitioned tables isn't intended to scale past a few dozen partitions.

I think we'll be able to do better when we have an explicit
representation of partitioning, since then the planner won't
have to expend large amounts of effort reverse-engineering knowledge
about how an inheritance tree is partitioned. Before that happens,
it's not really worth the trouble to worry about such cases.

regards, tom lane

thank you ... - the manual is clear here, but we wanted to see if there is some reasonably low-hanging fruit to get around this.
it is no solution but at least a clear statement ...

many thanks,

hans

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de

#8 Robert Haas
robertmhaas@gmail.com
In reply to: Hans-Jürgen Schönig (#1)
Re: plan time of MASSIVE partitioning ...

On Tue, Sep 7, 2010 at 2:14 PM, Boszormenyi Zoltan <zb@cybertec.at> wrote:

Hi,

Robert Haas wrote:

2010/9/3 PostgreSQL - Hans-Jürgen Schönig <postgres@cybertec.at>:

i tried this one with 5000 unindexed tables (just one col):

test=# \timing
Timing is on.
test=# prepare x(int4) AS select * from t_data order by id desc;
PREPARE
Time: 361.552 ms

you will see similar or higher runtimes in case of 500 partitions and a handful of indexes.

I'd like to see (1) a script to reproduce your test environment (as
Stephen also requested) and (2) gprof or oprofile results.

attached are the test scripts, create_tables.sql and childtables.sql.
The following query takes 4.7 seconds according to psql with \timing on:
EXPLAIN SELECT * FROM qdrs
WHERE streamstart BETWEEN '2010-04-06' AND '2010-06-25'
ORDER BY streamhash;

Neat. Have you checked what effect this has on memory consumption?

Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#9 Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Robert Haas (#8)
Re: plan time of MASSIVE partitioning ...

hello ...

no, we have not checked memory consumption.
there is still some stuff left to optimize away - it seems we are getting close to O(n^2) somewhere.
"equal" is called really often in our sample case as well:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds     calls   s/call   s/call  name
18.87      0.80     0.80       4812     0.00     0.00  make_canonical_pathkey
15.33      1.45     0.65       4811     0.00     0.00  get_eclass_for_sort_expr
14.15      2.05     0.60    8342410     0.00     0.00  equal
 6.13      2.31     0.26     229172     0.00     0.00  SearchCatCache
 3.66      2.47     0.16    5788835     0.00     0.00  _equalList
 3.07      2.60     0.13    1450043     0.00     0.00  hash_search_with_hash_value
 2.36      2.70     0.10    2272545     0.00     0.00  AllocSetAlloc
 2.12      2.79     0.09     811460     0.00     0.00  hash_any
 1.89      2.87     0.08    3014983     0.00     0.00  list_head
 1.89      2.95     0.08     574526     0.00     0.00  _bt_compare
 1.77      3.02     0.08   11577670     0.00     0.00  list_head
 1.42      3.08     0.06       1136     0.00     0.00  tzload
 0.94      3.12     0.04    2992373     0.00     0.00  AllocSetFreeIndex

look at the number of calls ...
"equal" is scary ...

make_canonical_pathkey is fixed it seems.
get_eclass_for_sort_expr seems a little more complex to fix.

great you like it ...

regards,

hans

On Sep 8, 2010, at 3:54 PM, Robert Haas wrote:

On Tue, Sep 7, 2010 at 2:14 PM, Boszormenyi Zoltan <zb@cybertec.at> wrote:

Hi,

Robert Haas wrote:

2010/9/3 PostgreSQL - Hans-Jürgen Schönig <postgres@cybertec.at>:

i tried this one with 5000 unindexed tables (just one col):

test=# \timing
Timing is on.
test=# prepare x(int4) AS select * from t_data order by id desc;
PREPARE
Time: 361.552 ms

you will see similar or higher runtimes in case of 500 partitions and a handful of indexes.

I'd like to see (1) a script to reproduce your test environment (as
Stephen also requested) and (2) gprof or oprofile results.

attached are the test scripts, create_tables.sql and childtables.sql.
The following query takes 4.7 seconds according to psql with \timing on:
EXPLAIN SELECT * FROM qdrs
WHERE streamstart BETWEEN '2010-04-06' AND '2010-06-25'
ORDER BY streamhash;

Neat. Have you checked what effect this has on memory consumption?

Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de

#10 Stephen Frost
sfrost@snowman.net
In reply to: Hans-Jürgen Schönig (#9)
Re: plan time of MASSIVE partitioning ...

* Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

no, we have not checked memory consumption.
there is still some stuff left to optimize away - it seems we are getting close to O(n^2) somewhere.
"equal" is called really often in our sample case as well:

Did the mail with the scripts, etc, get hung up due to size or
something..? I didn't see it on the mailing list nor in the archives..
If so, could you post them somewhere so others could look..?

Thanks,

Stephen

#11 Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Stephen Frost (#10)
Re: plan time of MASSIVE partitioning ...

here is the patch again.
we accidentally attached a wrong test file to the original posting so it grew too big. we had to withdraw it from the moderation queue (this happens if you code from 8am to 10pm).
here is just the patch - it is nice and small.

you can easily test it by making yourself a nice parent table, many subtables (hundreds or thousands) and a decent number of indexes per partition.
then run PREPARE with \timing to see what happens.
you should get scary planning times. the more indexes and tables, the scarier it will be.

using this wonderful RB tree the time for this function call goes down to basically zero.
i hope this is something which is useful to some folks out there.

many thanks,

hans

Attachments:

canon-pathkeys-as-rbtree-3-ctxdiff.patch (application/octet-stream, +235 -26)
#12 Stephen Frost
sfrost@snowman.net
In reply to: Robert Haas (#8)
Re: plan time of MASSIVE partitioning ...

* Robert Haas (robertmhaas@gmail.com) wrote:

Neat. Have you checked what effect this has on memory consumption?

Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open

Would be good to have the patch updated to be against HEAD before
posting to the commitfest.

Thanks,

Stephen

#13 Hans-Jürgen Schönig
postgres@cybertec.at
In reply to: Stephen Frost (#12)
Re: plan time of MASSIVE partitioning ...

On Sep 8, 2010, at 4:57 PM, Stephen Frost wrote:

* Robert Haas (robertmhaas@gmail.com) wrote:

Neat. Have you checked what effect this has on memory consumption?

Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open

Would be good to have the patch updated to be against HEAD before
posting to the commitfest.

Thanks,

Stephen

we will definitely provide something which is for HEAD.
but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

regards,

hans

--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de

#14 Stephen Frost
sfrost@snowman.net
In reply to: Hans-Jürgen Schönig (#13)
Re: plan time of MASSIVE partitioning ...

* Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

An 18% reduction in planning time is certainly nice, provided it doesn't slow down or
break other things.. I'm looking through the patch now actually and
I'm not really happy with the naming, comments, or some of the code
flow, but I think the concept looks reasonable.

Thanks,

Stephen

#15 Robert Haas
robertmhaas@gmail.com
In reply to: Hans-Jürgen Schönig (#13)
Re: plan time of MASSIVE partitioning ...

2010/9/8 Hans-Jürgen Schönig <postgres@cybertec.at>:

On Sep 8, 2010, at 4:57 PM, Stephen Frost wrote:

* Robert Haas (robertmhaas@gmail.com) wrote:

Neat.  Have you checked what effect this has on memory consumption?

Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open

Would be good to have the patch updated to be against HEAD before
posting to the commitfest.

we will definitely provide something which is for HEAD.
but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

Just remember that four small patches (say) are apt to get committed
faster than one big one.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

#16 Stephen Frost
sfrost@snowman.net
In reply to: Robert Haas (#15)
Re: plan time of MASSIVE partitioning ...

* Robert Haas (robertmhaas@gmail.com) wrote:

2010/9/8 Hans-Jürgen Schönig <postgres@cybertec.at>:

but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

Just remember that four small patches (say) are apt to get committed
faster than one big one.

Indeed, but code like this makes me wonder if this is really working the
way it's supposed to:

+   val1 = (long)pk_left->pk_eclass;
+   val2 = (long)pk_right->pk_eclass;
+ 
+   if (val1 < val2)
+       return -1;
+   else if (val1 > val2)
+       return 1;

Have you compared how big the tree gets to the size of the list it's
supposed to be replacing..?

Thanks,

Stephen

#17 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Stephen Frost (#14)
Re: plan time of MASSIVE partitioning ...

Excerpts from Stephen Frost's message of Wed Sep 08 11:26:55 -0400 2010:

* Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

An 18% reduction in planning time is certainly nice, provided it doesn't slow down or
break other things.. I'm looking through the patch now actually and
I'm not really happy with the naming, comments, or some of the code
flow, but I think the concept looks reasonable.

I don't understand the layering between pg_tree and rbtree. Why does it
exist at all? At first I thought this was another implementation of
rbtrees, but then I noticed it sits on top of it. Is this really
necessary?

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Hans-Jürgen Schönig (#11)
Re: plan time of MASSIVE partitioning ...

=?iso-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at> writes:

here is the patch again.

This patch seems remarkably documentation-free. What is it trying to
accomplish and what is it doing to the planner data structures?
(Which do have documentation BTW.) Also, what will it do to runtime
in normal cases where the pathkeys list isn't that long?

regards, tom lane

#19 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Stephen Frost (#16)
Re: plan time of MASSIVE partitioning ...

Stephen Frost wrote:

* Robert Haas (robertmhaas@gmail.com) wrote:

2010/9/8 Hans-Jürgen Schönig <postgres@cybertec.at>:

but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

Just remember that four small patches (say) are apt to get committed
faster than one big one.

Indeed, but code like this makes me wonder if this is really working the
way it's supposed to:

+   val1 = (long)pk_left->pk_eclass;
+   val2 = (long)pk_right->pk_eclass;
+ 
+   if (val1 < val2)
+       return -1;
+   else if (val1 > val2)
+       return 1;

The original code checked for pointers being equal among
other conditions. This was an (almost) straight conversion
to a comparison function for rbtree. Do you mean casting
the pointer to long? Yes, e.g. on 64-bit Windows it wouldn't
work. Back to plain pointer comparison.

Have you compared how big the tree gets to the size of the list it's
supposed to be replacing..?

No, but I think it's obvious. Now we have one TreeCell
where we had one ListCell.

Best regards,
Zoltán Böszörményi

--
----------------------------------
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
http://www.postgresql.at/

#20 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Alvaro Herrera (#17)
Re: plan time of MASSIVE partitioning ...

Alvaro Herrera wrote:

Excerpts from Stephen Frost's message of Wed Sep 08 11:26:55 -0400 2010:

* Hans-Jürgen Schönig (postgres@cybertec.at) wrote:

but, it seems the problem we are looking at is not sufficiently fixed yet.
in our case we shaved off some 18% of planning time or so - looking at the other top 2 functions i got the feeling that more can be done to reduce this. i guess we have to attack those as well.

An 18% reduction in planning time is certainly nice, provided it doesn't slow down or
break other things.. I'm looking through the patch now actually and
I'm not really happy with the naming, comments, or some of the code
flow, but I think the concept looks reasonable.

I don't understand the layering between pg_tree and rbtree. Why does it
exist at all? At first I thought this was another implementation of
rbtrees, but then I noticed it sits on top of it. Is this really
necessary?

No, if it's acceptable to omit PlannerInfo from outfuncs.c.
Or at least its canon_pathkeys member. Otherwise yes, it's
necessary. We need to store (Node *) in a fast searchable way.

This applies to anything else that may need to be converted
from list to tree to decrease planning time. Like ec_members
in EquivalenceClass.

Best regards,
Zoltán Böszörményi

--
----------------------------------
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
http://www.postgresql.at/

#21 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stephen Frost (#16)
#22 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#20)
#23 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stephen Frost (#14)
#24 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#22)
#25 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#23)
#26 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#24)
#27 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#26)
#28 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#27)
#29 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#28)
#30 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Boszormenyi Zoltan (#29)
#31 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Boszormenyi Zoltan (#30)
#32 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#31)
#33 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#32)
#34 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Boszormenyi Zoltan (#33)
#35 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Heikki Linnakangas (#34)
#36 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Boszormenyi Zoltan (#35)
#37 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Boszormenyi Zoltan (#36)
#38 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Boszormenyi Zoltan (#37)
#39 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Boszormenyi Zoltan (#37)
#40 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#38)
#41 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Heikki Linnakangas (#39)
#42 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#40)
#43 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#42)
#44 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#43)
#45 Boszormenyi Zoltan
zb@cybertec.at
In reply to: Tom Lane (#44)
#46 Leonardo Francalanci
m_lists@yahoo.it
In reply to: Boszormenyi Zoltan (#45)
#47 Leonardo Francalanci
m_lists@yahoo.it
In reply to: Leonardo Francalanci (#46)
#48 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Leonardo Francalanci (#46)
#49 Leonardo Francalanci
m_lists@yahoo.it
In reply to: Tom Lane (#48)
#50 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Boszormenyi Zoltan (#45)
#51 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#48)
#52 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#50)
#53 Leonardo Francalanci
m_lists@yahoo.it
In reply to: Tom Lane (#51)
#54 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Leonardo Francalanci (#53)
#55 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#52)
#56 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#51)
#57 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#56)
#58 Leonardo Francalanci
m_lists@yahoo.it
In reply to: Tom Lane (#54)
#59 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Leonardo Francalanci (#58)
#60 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#51)
#61 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#60)
#62 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#59)
#63 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#62)
#64 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#63)