Auto Partitioning Patch - WIP version 1

Started by NikhilS · about 19 years ago · 100 messages · pgsql-hackers
#1NikhilS
nikkhils@gmail.com

Hi,

Please find attached the WIP version 1 of the auto partitioning patch. There
was discussion on this a while back on -hackers at:
http://archives.postgresql.org/pgsql-hackers/2007-03/msg00375.php

Please note that this patch tries to automate the activities that currently
are carried out manually. It does nothing fancy beyond that for now. There
were a lot of good suggestions, I have noted them down but for now I have
tried to stick to the initial goal of automating existing steps for
providing partitioning.

Things that this patch does:

i) Handle new syntax to provide partitioning:

CREATE TABLE tabname (
...
) PARTITION BY
RANGE(ColId)
| LIST(ColId)
(
PARTITION partition_name CHECK(...),
PARTITION partition_name CHECK(...)
...
);

ii) Create master table.
iii) Create children tables based on the number of partitions specified and
make them inherit from the master table.

The following things are TODOs:

iv) Auto generate rules using the checks mentioned for the partitions, to
handle INSERTs/DELETEs/UPDATEs to navigate them to the appropriate child.
Note that checks specified directly on the master table will get inherited
automatically.
v) Based on the PRIMARY, UNIQUE information specified, pass it on to the
children tables.
vi) [stretch goal] Support HASH partitions

I will try to complete the above-mentioned TODOs as soon as possible.

Comments, feedback appreciated.

Thanks and Regards,
Nikhils
--

EnterpriseDB http://www.enterprisedb.com

Attachments:

auto-partition-v1.0.patch (text/x-patch; charset=ANSI_X3.4-1968) +249 −20
#2NikhilS
nikkhils@gmail.com
In reply to: NikhilS (#1)
Re: Auto Partitioning Patch - WIP version 1

Hi,

The following things are TODOs:

iv) Auto generate rules using the checks mentioned for the partitions, to
handle INSERTs/DELETEs/UPDATEs to navigate them to the appropriate child.
Note that checks specified directly on the master table will get inherited
automatically.

Am planning to do the above by using the check constraint specified for each
partition. This constraint's raw_expr field ends up becoming the whereClause
for the rule specific to that partition.

One question is whether we should allow auto creation of UPDATE rules,
given that updates can end up spanning multiple partitions if the column
on which partitioning is specified gets updated?

Also, if we decide to auto-add rules for UPDATE, the raw_expr will need
to be modified to refer to "OLD."col, which can be quite a headache. We
do not have parsetree walker/mutator functions, as far as I could see in
the code.
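As a rough illustration of the idea (purely a Python sketch, not the patch's code; the table names and predicates are made up), each partition's CHECK expression can double as the condition deciding where an INSERT rule redirects a row:

```python
from typing import Callable, Dict, List, Tuple

# Each partition carries the predicate taken from its CHECK constraint.
Partition = Tuple[str, Callable[[Dict], bool]]

def route_insert(partitions: List[Partition], row: Dict) -> str:
    """Return the child table whose rule condition accepts this row,
    mimicking a set of auto-generated ON INSERT ... DO INSTEAD rules."""
    for name, check in partitions:
        if check(row):
            return name
    raise ValueError("no partition accepts this row")

# Hypothetical range-partitioned table:
parts: List[Partition] = [
    ("sales_q1", lambda r: 1 <= r["month"] <= 3),
    ("sales_q2", lambda r: 4 <= r["month"] <= 6),
]

print(route_insert(parts, {"month": 5}))  # sales_q2
```

An UPDATE rule would additionally have to rewrite the same predicate in terms of OLD-column references, which is exactly the headache described.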

Regards,
Nikhils

--
EnterpriseDB http://www.enterprisedb.com

#3Markus Wanner
markus@bluegap.ch
In reply to: NikhilS (#2)
Re: Auto Partitioning

Hi,

NikhilS wrote:

The following things are TODOs:

iv) Auto generate rules using the checks mentioned for the partitions, to
handle INSERTs/DELETEs/UPDATEs to navigate them to the appropriate child.
Note that checks specified directly on the master table will get inherited
automatically.

Am planning to do the above by using the check constraint specified for
each partition. This constraint's raw_expr field ends up becoming the
whereClause for the rule specific to that partition.

I appreciate your efforts, but I'm not sure this has been discussed
enough. There seem to be two ideas floating around:

- you are heading for automating the current kludge, which involves
creating partitions and constraints by hand. AFAICT, you want to
support list and range partitioning.

- Simon Riggs has proposed partitioning functions, which could easily
handle any type of partitioning (hash, list, range and any mix of
those).

Both proposals do not have much to do with the missing multi-table
indices. It's clear to me that we have to implement those someday, anyway.

AFAICT, the first proposal does not ease the task of writing correct
constraints, so that we are sure that each row ends up in only exactly
one partition. The second would.

But the second proposal makes it hard for the planner to choose the
right partitions, i.e. if you request a range of ids, the planner would
have to query the partitioning function for every possible value. The
first variant could use constraint exclusion for that.

Neither proposal has gone as far as thinking about switching from one
partitioning rule set to another. That gets especially hard if you
consider database restarts during re-partitioning.

Here are some thoughts I have come up with recently. This is all about
how to partition, not about how to implement multi-table indices.
Sorry if this got somewhat longish. And no, this is certainly not for
8.3 ;-)

I don't like partitioning rules, which leave open questions, i.e. when
there are values for which the system does not have an answer (and would
have to fall back to a default) or even worse, where it could give
multiple correct answers. Given that premise, I see only two basic
partitioning types:

- splits: those can be used for what's commonly known as list and range
partitioning. If you want customers A-M to end up on partition 1 and
customers N-Z on partition 2 you would split between M and N. (That
way, the system would still know what to do with a customer name
beginning with an @ sign, for example. The only requirement for a
split is that the underlying data type supports comparison
operators.)

- modulo: I think this is commonly known as hash partitioning. It
requires an integer input, possibly by hashing, and calculates the
remainder of a division by n. That should give an equal distribution
among n partitions.

Besides the expression to work on, a split always needs one argument,
the split point, and divides into two buckets. A modulo splits into two
or more buckets and needs the divisor as an argument.
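The two node types can be sketched in a few lines of Python (illustrative only, not a proposed implementation):

```python
def split_bucket(value, split_point) -> int:
    """A split divides rows into two buckets at one point; it only
    requires that the data type supports comparison operators."""
    return 0 if value < split_point else 1

def modulo_bucket(value: int, n: int) -> int:
    """A modulo node takes an integer (possibly a hash) and divides
    rows among n buckets by the remainder."""
    return value % n

print(split_bucket("Miller", "N"))  # 0: customers A-M
print(split_bucket("Norris", "N"))  # 1: customers N-Z
print(modulo_bucket(50123, 3))      # 2
```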

Of course, these two types can be combined. I like to think of these
combinations as trees. Let me give you a simple example:

         table customers
               |
               |
      split @ name >= 'N'
            /     \
           /       \
       part1       part2

A combination of the two would look like:

              table invoices
                    |
                    |
          split @ id >= 50000
                /        \
               /          \
     hash(id) modulo 3   part4
         /    |    \
        /     |     \
    part1  part2  part3

Knowledge of these trees would allow the planner to choose more wisely,
i.e. given a comparative condition (WHERE id > 100000) it could check
the splits in the partitioning tree and only scan the partitions
necessary. Likewise with an equality condition (WHERE id = 1234).
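To sketch how a planner could walk such a tree (a hypothetical Python model of the invoices example above; the class and partition names are made up, and the modulo level is collapsed into a single leaf):

```python
# A split node divides the key space at one point; leaves are partitions.
class Split:
    def __init__(self, point, left, right):
        self.point, self.left, self.right = point, left, right

def partitions_for_range(node, lo, hi):
    """Collect the leaf partitions that may hold keys in [lo, hi]."""
    if isinstance(node, str):        # a leaf is just a partition name
        return [node]
    parts = []
    if lo < node.point:              # left bucket holds keys < point
        parts += partitions_for_range(node.left, lo, hi)
    if hi >= node.point:             # right bucket holds keys >= point
        parts += partitions_for_range(node.right, lo, hi)
    return parts

tree = Split(50000, "low_ids", "part4")
print(partitions_for_range(tree, 60000, 70000))  # ['part4']
```

A range predicate touching both sides of the split point would return both leaves; everything else is pruned without looking at per-partition constraints.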

As it's a better definition of the partitioning rules, the planner would
not have to check constraints of all partitions, as the current
constraint exclusion feature does. It might even be likely that querying
this partitioning tree and then scanning the single-table index will be
faster than an index scan on a multi-table index. At least, I cannot see
why it should be any slower.

Such partitioning rule sets would allow us to re-partition by adding a
split node on top of the tree. The split point would have to increment
together with the progress of moving around the rows among the
partitions, so that the database would always be in a consistent state
regarding partitioning.

Additionally, it's easy to figure out when no rows, or only a few, need
to be moved around, i.e. when adding a split @ id >= 1000 to a table
which only has ids < 1000.

I believe that this is a well defined partitioning rule set, which has
more information for the planner than a partitioning function could ever
have. And it is less of a foot-gun than hand written constraints,
because it does not allow the user to specify illegal partitioning rules
(i.e. it's always guaranteed, that every row ends up in only one partition).

Of course, it's far more work than either of the above proposals, but
maybe we can go there step by step? Maybe, NikhilS proposal is more like
a step towards such a beast?

Feedback of any form is very welcome.

Regards

Markus

#4NikhilS
nikkhils@gmail.com
In reply to: Markus Wanner (#3)
Re: Auto Partitioning

Hi,

I appreciate your efforts, but I'm not sure if this has been discussed
enough. There seem to be two ideas floating around:

- you are heading for automating the current kludge, which involves
creating partitions and constraints by hand. AFAICT, you want to
support list and range partitioning.

- Simon Riggs has proposed partitioning functions, which could easily
handle any type of partitioning (hash, list, range and any mix of
those).

Thanks Markus.

When I submitted the proposal, AFAIR there was no objection to going with
the first approach. Yes, there was a lot of forward-looking discussion,
but since what I had proposed (at least syntax-wise) was similar/closer
to MySQL and Oracle, I did not see anyone objecting to it. I think SQL
Server provides partitioning functions similar to Simon's proposal. And
all along, I had maintained that I wanted to automate, as far as
possible, the existing mechanism for partitioning. I do not remember
anyone objecting to this either.

Our current partitioning solution is based on inheritance. With that in
mind, for 8.3 I thought an implementation based on auto rules creation would
be the way to go.

Having said that, obviously I would want to go with the consensus on this
list as to what we think is the *best* way to go forward with partitioning.

Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com

#5Simon Riggs
simon@2ndQuadrant.com
In reply to: Markus Wanner (#3)
Re: Auto Partitioning

On Wed, 2007-04-04 at 14:20 +0200, Markus Schiltknecht wrote:

Both proposals do not have much to do with the missing multi-table
indices. It's clear to me that we have to implement those someday,
anyway.

I agree with much of your post, though this particular point caught my
eye. If you'll forgive me for jumping on an isolated point in your post:

Multi-table indexes sound like a good solution until you consider how
big they would be. The reason we "need" a multi-table index is because
we are using partitioning, which we wouldn't be doing unless the data
was fairly large. So the index is going to be (Num partitions *
fairly-large) in size, which means it's absolutely enormous. Adding and
dropping partitions also becomes a management nightmare, so overall
multi-table indexes look unusable to me. Multi-table indexes also remove
the possibility of loading data quickly, then building an index on the
data, then adding the table as a partition - both the COPY and the
CREATE INDEX would be slower with a pre-existing multi-table index.

My hope is to have a mechanism to partition indexes or recognise that
they are partitioned, so that a set of provably-distinct unique indexes
can provide the exact same functionality as a single large unique index,
just without the management nightmare.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#6Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#5)
Re: Auto Partitioning

"Simon Riggs" <simon@2ndquadrant.com> writes:

On Wed, 2007-04-04 at 14:20 +0200, Markus Schiltknecht wrote:

Both proposals do not have much to do with the missing multi-table
indices. It's clear to me that we have to implement those someday,
anyway.

I agree with much of your post, though this particular point caught my
eye. If you'll forgive me for jumping on an isolated point in your post:

Multi-table indexes sound like a good solution until you consider how
big they would be.

Put another way, multi-table indexes defeat the whole purpose of having
partitioned the table in the first place. If you could have managed a single
massive index then you wouldn't have bothered partitioning.

However there is a use case that can be handled by a kind of compromise index.
Indexes that have leading columns which restrict all subtrees under that point
to a single partition can be handled by a kind of meta-index. So you have one
index which just points you to the right partition and corresponding index.

That lets you enforce unique constraints as long as the partition key is part
of the unique constraint. In practice people are usually pretty comfortable
not having the database enforce such a constraint since it's easy to have the
application enforce these types of constraints anyways.
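A toy model of that meta-index idea (hypothetical Python; the partition names and fake 'ctid' values are made up): the leading column picks the partition, and that partition's own index resolves the rest of the key:

```python
import bisect

# Partition boundaries on the leading column:
# [0,100) -> p1, [100,200) -> p2, [200,...) -> p3
bounds = [100, 200]
names = ["p1", "p2", "p3"]

# Per-partition local indexes mapping full keys to row locations:
local_indexes = {
    "p1": {(50, "x"): "ctid-a"},
    "p2": {(150, "y"): "ctid-b"},
    "p3": {},
}

def lookup(a, b):
    """Meta-index step: find the partition from the leading column,
    then probe that partition's local index for the full key."""
    part = names[bisect.bisect_right(bounds, a)]
    return part, local_indexes[part].get((a, b))

print(lookup(150, "y"))  # ('p2', 'ctid-b')
```

Uniqueness can then be enforced per partition, since the leading (partition-key) column guarantees a given key can live in only one partition.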

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com

#7Andrew Dunstan
andrew@dunslane.net
In reply to: Simon Riggs (#5)
Re: Auto Partitioning

Simon Riggs wrote:

My hope is to have a mechanism to partition indexes or recognise that
they are partitioned, so that a set of provably-distinct unique indexes
can provide the exact same functionality as a single large unique index,
just without the management nightmare.

Will this address the fairly common data design problem where we need to
ensure that a given value is unique across several tables (possibly
siblings, possibly not)? If so, then full steam ahead.

cheers

andrew

#8Markus Wanner
markus@bluegap.ch
In reply to: NikhilS (#4)
Re: Auto Partitioning

Hi,

NikhilS wrote:

Our current partitioning solution is based on inheritance. With that in
mind, for 8.3 I thought an implementation based on auto rules creation
would be the way to go.

That's completely reasonable. And as I've said, it's probably even a
step towards what I've outlined (automation of creation of partitions).

Regards

Markus

#9Markus Wanner
markus@bluegap.ch
In reply to: Simon Riggs (#5)
Re: Auto Partitioning

Hi,

Simon Riggs wrote:

I agree with much of your post, though this particular point caught my
eye. If you'll forgive me for jumping on an isolated point in your post:

No problem.

Multi-table indexes sound like a good solution until you consider how
big they would be. The reason we "need" a multi-table index is because
we are using partitioning, which we wouldn't be doing unless the data
was fairly large. So the index is going to be (Num partitions *
fairly-large) in size, which means it's absolutely enormous. Adding and
dropping partitions also becomes a management nightmare, so overall
multi-table indexes look unusable to me. Multi-table indexes also remove
the possibility of loading data quickly, then building an index on the
data, then adding the table as a partition - both the COPY and the
CREATE INDEX would be slower with a pre-existing multi-table index.

I agree. (And thanks to TOAST, we never have very wide tables with
relatively few rows, right? I mean, something like pictures stored in
bytea columns or some such.)

My hope is to have a mechanism to partition indexes or recognise that
they are partitioned, so that a set of provably-distinct unique indexes
can provide the exact same functionality as a single large unique index,
just without the management nightmare.

Uhm... I don't quite get what you mean by "provably-distinct unique
indexes".

As long as the first columns of an index are equal to all columns of the
partitioning columns, there is no problem. You could easily reduce to
simple per-table indexes and using the partitioning rule set to decide
which index to query.

But how to create an (unique) index which is completely different from
the partitioning key?

Regards

Markus

#10Markus Wanner
markus@bluegap.ch
In reply to: Bruce Momjian (#6)
Re: Auto Partitioning

Hi,

Gregory Stark wrote:

Put another way, multi-table indexes defeat the whole purpose of having
partitioned the table in the first place. If you could have managed a single
massive index then you wouldn't have bothered partitioning.

That depends very much on the implementation of the multi-table index,
as you describe below. I think the major missing part is not *how* such
a meta-index should work - it's easy to see that one could use the
per-table indices - but a programming interface, similar to the current
index scan or sequential scan facility, which could return a table and
tuple pointer, no?

However there is a use case that can be handled by a kind of compromise index.
Indexes that have leading columns which restrict all subtrees under that point
to a single partition can be handled by a kind of meta-index. So you have one
index which just points you to the right partition and corresponding index.

Yeah.

That lets you enforce unique constraints as long as the partition key is part
of the unique constraint.

Is that already sufficient? That would alter the ordering of the columns
in the index, no? I mean:

CREATE INDEX x ON test(a, b, c);

isn't the same as

CREATE INDEX x ON test(c, b, a);

That's why I'd say, the first column of an index would have to be equal
to all of the columns used in the partitioning key.

Regards

Markus

#11Simon Riggs
simon@2ndQuadrant.com
In reply to: Markus Wanner (#9)
Re: Auto Partitioning

On Wed, 2007-04-04 at 16:31 +0200, Markus Schiltknecht wrote:

But how to create an (unique) index which is completely different from
the partitioning key?

Don't?

Most high volume tables are Fact tables with potentially more than 1 row
per Object/Dimension, so the unique index isn't appropriate in those
cases.

When partitioning a Major Entity, it's much easier to regard the PK as
the partitioning key + unique key, which is frequently possible, even if
it does break the exhortation against intelligent keys.

I wouldn't stand in the way of someone trying to add that functionality,
but I would describe the use case as fairly narrow.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#12Markus Wanner
markus@bluegap.ch
In reply to: Simon Riggs (#11)
Re: Auto Partitioning

Hi,

Simon Riggs wrote:

Most high volume tables are Fact tables with potentially more than 1 row
per Object/Dimension, so the unique index isn't appropriate in those
cases.

When partitioning a Major Entity, it's much easier to regard the PK as
the partitioning key + unique key, which is frequently possible, even if
it does break the exhortation against intelligent keys.

Okay, so you are saying that a general purpose multi-table index isn't
needed, but instead something based on the partitioning rule set and the
per table indexes should be sufficient for the vast majority of cases?

Regards

Markus

#13David Fetter
david@fetter.org
In reply to: Andrew Dunstan (#7)
Re: Auto Partitioning

On Wed, Apr 04, 2007 at 10:07:39AM -0400, Andrew Dunstan wrote:

Simon Riggs wrote:

My hope is to have a mechanism to partition indexes or recognise
that they are partitioned, so that a set of provably-distinct
unique indexes can provide the exact same functionality as a single
large unique index, just without the management nightmare.

Will this address the fairly common data design problem where we
need to ensure that a given value is unique across several tables
(possibly siblings, possibly not)?

That would be where the provably-distinct part comes in, so yes.

If so, then full steam ahead.

Cheers,
D
--
David Fetter <david@fetter.org> http://fetter.org/
phone: +1 415 235 3778 AIM: dfetter666
Skype: davidfetter

Remember to vote!
Consider donating to PostgreSQL: http://www.postgresql.org/about/donate

#14Bruce Momjian
bruce@momjian.us
In reply to: Markus Wanner (#10)
Re: Auto Partitioning

"Markus Schiltknecht" <markus@bluegap.ch> writes:

CREATE INDEX x ON test(a, b, c);

That's why I'd say, the first column of an index would have to be equal to all
of the columns used in the partitioning key.

That's certainly the simple case. It would be nice to be able to create an
index like that and have the system automatically recognize that the leading
column is identical to the partition key of (a) and therefore build indexes on
each partition on (b,c).

However there are also cases such as where you have a=0..99 in one partition
and a=100..199 in partition two, etc. It could still automatically build
indexes on (a,b,c) on each partition and somehow note that the unique
constraint is guaranteed across the whole partitioned table.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com

#15Markus Wanner
markus@bluegap.ch
In reply to: Bruce Momjian (#14)
Re: Auto Partitioning

Hi,

Gregory Stark wrote:

However there are also cases such as where you have a=0..99 in one partition
and a=100..199 in partition two, etc. It could still automatically build
indexes on (a,b,c) on each partition and somehow note that the unique
constraint is guaranteed across the whole partitioned table.

Uhm... yes, because 'a' is the partitioning key.

According to my outline for partitioning rule sets, you would have a
split @ a >= 100, probably another one @ a >= 200, etc. But nonetheless,
'a' is the only column needed to decide which partition a row has
to end up in, so 'a' is the only column in the partitioning key.

What I'm saying is that, given your example, it's not easily possible to
have an index on (b,a), even if 'a' is also in the partitioning key. It's
very well possible to emulate a multi-table index on (a,b), though.

Brainstorming about this somewhat more: how about having multiple
columns in the partitioning key, i.e. 'a' and 'b', and the following
rule set (which admittedly is somewhat special):

          table sample
                |
                |
         split @ a >= 100
              /     \
             /       \
   split @ b >= 100  part3
         /    \
        /      \
    part1    part2

An index on (a, b) could easily be 'emulated' by having such an index on
all the partitions, but can we have an index on (b, a) like that?
Probably not, because at the first split, we would have to duplicate.
I.e. for an index scan on 'b = 22', we would have to scan the index on
part3 as well as part1.

Thus one can say that a multi-table index can only easily be
'emulated' if it has the same columns as the partitioning key, in the
same order. For the above example, these ones would be possible:

(a)
(a,b)
(a,b,...)

Yet another thought: the emulation of multi-table indexes, in this case,
is like concatenating the indexes of the partitions in the right order.
Asking for an index scan for 'WHERE a >= 95 AND a <= 105' when having a
split at a >= 100, you would have to start on the index in the left
bucket (with a < 100) and return everything until the end of the index,
then continue on the index in the right bucket (with a >= 100). So you
also have to be able to determine an order, which is easily possible for
splits, but not so simple for modulos (hash partitioning).

For such a modulo node, the executor would have to start multiple index
scans, i.e.:

          table sample
                |
                |
          'id' modulo 4
        /    |     |    \
       /     |     |     \
   part1  part2  part3  part4

When scanning for a range (i.e. 'WHERE id >= 5 AND id <= 17'), the
planner would have to request an index scan on each of the partitions,
joining the results in the right order.
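The difference between the two cases can be sketched in Python (made-up data; `heapq.merge` stands in for an executor joining several ordered index scans):

```python
import heapq

# Ordered per-partition index scans around a split @ a >= 100:
left_scan = [95, 97, 99]        # partition holding a < 100
right_scan = [100, 101, 105]    # partition holding a >= 100

# For a split, plain concatenation already preserves the order:
print(left_scan + right_scan)   # [95, 97, 99, 100, 101, 105]

# For a modulo (hash) node there is no such order between partitions,
# so several ordered scans have to be merged instead:
part1, part2, part3, part4 = [0, 4, 8], [1, 5, 9], [2, 6], [3, 7]
print(list(heapq.merge(part1, part2, part3, part4)))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```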

So, why not completely emulate all multi-table index scans? The above
restriction would disappear, if we could teach the planner and executor
how to join multiple index scan results, no?

Questioning the other way around: do we need any sort of multi-table
indexes at all, or isn't it enough to teach the planner and executor how
to intelligently scan through (possibly) multiple indexes to get what is
requested?

Regards

Markus

#16Simon Riggs
simon@2ndQuadrant.com
In reply to: Markus Wanner (#15)
Re: Auto Partitioning

On Wed, 2007-04-04 at 20:55 +0200, Markus Schiltknecht wrote:

Questioning the other way around: do we need any sort of multi-table
indexes at all, or isn't it enough to teach the planner and executor how
to intelligently scan through (possibly) multiple indexes to get what is
requested?

No, I don't think we need multi-table indexes at all.

The planner already uses the Append node to put together multiple plans.
The great thing is it will put together IndexScans and SeqScans as
applicable. No need for multi-scans as a special node type.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com

#17Joshua D. Drake
jd@commandprompt.com
In reply to: Simon Riggs (#16)
Re: Auto Partitioning

Simon Riggs wrote:

On Wed, 2007-04-04 at 20:55 +0200, Markus Schiltknecht wrote:

Questioning the other way around: do we need any sort of multi-table
indexes at all, or isn't it enough to teach the planner and executor how
to intelligently scan through (possibly) multiple indexes to get what is
requested?

No, I don't think we need multi-table indexes at all.

If we don't have multi-table indexes, how do we enforce a primary key
against a partitioned set? What about non-primary keys that are just
UNIQUE? What about check constraints that aren't a part of the exclusion?

Joshua D. Drake

The planner already uses the Append node to put together multiple plans.
The great thing is it will put together IndexScans and SeqScans as
applicable. No need for multi-scans as a special node type.

--

=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

#18Andrew Dunstan
andrew@dunslane.net
In reply to: David Fetter (#13)
Re: Auto Partitioning

David Fetter wrote:

On Wed, Apr 04, 2007 at 10:07:39AM -0400, Andrew Dunstan wrote:

Simon Riggs wrote:

My hope is to have a mechanism to partition indexes or recognise
that they are partitioned, so that a set of provably-distinct
unique indexes can provide the exact same functionality as a single
large unique index, just without the management nightmare.

Will this address the fairly common data design problem where we
need to ensure that a given value is unique across several tables
(possibly siblings, possibly not)?

That would be where the provably-distinct part comes in, so yes.

That assumes you can provide some provably distinct test. In the general
case I have in mind that isn't so.

cheers

andrew

#19Markus Wanner
markus@bluegap.ch
In reply to: Joshua D. Drake (#17)
Re: Auto Partitioning

Hi,

Joshua D. Drake wrote:

If we don't have multi-table indexes how do we enforce a primary key
against a partitioned set?

The executor would have to be clever enough to not do a single index
scan, but possibly scan through multiple indexes when asking for
uniqueness, depending on the partitioning rule set.

Regards

Markus

#20Markus Wanner
markus@bluegap.ch
In reply to: Simon Riggs (#16)
Re: Auto Partitioning

Simon Riggs wrote:

The planner already uses the Append node to put together multiple plans.
The great thing is it will put together IndexScans and SeqScans as
applicable. No need for multi-scans as a special node type.

Yes... only that mixing 'concurrent' index scans in the right order
would probably save us an extra sort step in some cases. Consider this
with hash partitioning on (id):

SELECT * FROM test WHERE id > 1 AND id < 9999999 ORDER BY id;

Every partition should have an index on (id), so we already have pretty
well sorted data, we just need to mix the results of the index scan in
the correct order, no?

Regards

Markus

#21Markus Wanner
markus@bluegap.ch
In reply to: Andrew Dunstan (#18)
#22Andrew Dunstan
andrew@dunslane.net
In reply to: Markus Wanner (#21)
#23Simon Riggs
simon@2ndQuadrant.com
In reply to: Joshua D. Drake (#17)
#24Markus Wanner
markus@bluegap.ch
In reply to: Andrew Dunstan (#22)
#25Robert Treat
xzilla@users.sourceforge.net
In reply to: NikhilS (#4)
#26Joshua D. Drake
jd@commandprompt.com
In reply to: Robert Treat (#25)
#27Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#23)
#28Martijn van Oosterhout
kleptog@svana.org
In reply to: Markus Wanner (#19)
#29NikhilS
nikkhils@gmail.com
In reply to: Joshua D. Drake (#26)
#30Simon Riggs
simon@2ndQuadrant.com
In reply to: NikhilS (#29)
#31NikhilS
nikkhils@gmail.com
In reply to: Simon Riggs (#30)
#32Simon Riggs
simon@2ndQuadrant.com
In reply to: NikhilS (#31)
#33NikhilS
nikkhils@gmail.com
In reply to: Simon Riggs (#32)
#34Robert Treat
xzilla@users.sourceforge.net
In reply to: Bruce Momjian (#27)
#35Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Markus Wanner (#10)
#36Markus Wanner
markus@bluegap.ch
In reply to: Zeugswetter Andreas SB SD (#35)
#37Markus Wanner
markus@bluegap.ch
In reply to: Martijn van Oosterhout (#28)
#38Martijn van Oosterhout
kleptog@svana.org
In reply to: Markus Wanner (#37)
#39Tom Lane
tgl@sss.pgh.pa.us
In reply to: Markus Wanner (#37)
#40Simon Riggs
simon@2ndQuadrant.com
In reply to: Tom Lane (#39)
#41Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#40)
#42Zeugswetter Andreas SB SD
ZeugswetterA@spardat.at
In reply to: Simon Riggs (#40)
#43Simon Riggs
simon@2ndQuadrant.com
In reply to: Zeugswetter Andreas SB SD (#42)
#44Joshua D. Drake
jd@commandprompt.com
In reply to: Zeugswetter Andreas SB SD (#42)
#45Joshua D. Drake
jd@commandprompt.com
In reply to: Bruce Momjian (#41)
#46David Fetter
david@fetter.org
In reply to: Joshua D. Drake (#45)
#47Joshua D. Drake
jd@commandprompt.com
In reply to: David Fetter (#46)
#48Richard Troy
rtroy@ScienceTools.com
In reply to: Joshua D. Drake (#47)
#49Simon Riggs
simon@2ndQuadrant.com
In reply to: NikhilS (#2)
#50Bruce Momjian
bruce@momjian.us
In reply to: NikhilS (#1)
#51Simon Riggs
simon@2ndQuadrant.com
In reply to: NikhilS (#1)
#52NikhilS
nikkhils@gmail.com
In reply to: Simon Riggs (#51)
#53Bruce Momjian
bruce@momjian.us
In reply to: NikhilS (#52)
#54Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#53)
#55Simon Riggs
simon@2ndQuadrant.com
In reply to: NikhilS (#52)
#56Simon Riggs
simon@2ndQuadrant.com
In reply to: Bruce Momjian (#53)
#57NikhilS
nikkhils@gmail.com
In reply to: Tom Lane (#54)
#58Bruce Momjian
bruce@momjian.us
In reply to: NikhilS (#1)
#59Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#58)
#60Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#59)
#61Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#60)
#62Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#61)
#63Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#62)
#64Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#63)
#65Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: NikhilS (#52)
#66Emmanuel Cecchet
manu@frogthinker.org
In reply to: Jaime Casanova (#65)
#67Nikhil Sontakke
nikhil.sontakke@enterprisedb.com
In reply to: Jaime Casanova (#65)
#68Nikhil Sontakke
nikhil.sontakke@enterprisedb.com
In reply to: Nikhil Sontakke (#67)
#69Emmanuel Cecchet
manu@frogthinker.org
In reply to: Nikhil Sontakke (#68)
#70Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Emmanuel Cecchet (#69)
#71Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Jaime Casanova (#70)
#72Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#71)
#73Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Robert Haas (#72)
#74Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#73)
#75Nikhil Sontakke
nikhil.sontakke@enterprisedb.com
In reply to: Robert Haas (#74)
#76Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Nikhil Sontakke (#75)
#77Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#76)
#78Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#77)
#79Nikhil Sontakke
nikhil.sontakke@enterprisedb.com
In reply to: Alvaro Herrera (#76)
#80Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Robert Haas (#77)
#81Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#78)
#82Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#80)
#83Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Robert Haas (#81)
#84Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Jaime Casanova (#83)
#85Emmanuel Cecchet
manu@frogthinker.org
In reply to: Jaime Casanova (#84)
#86Robert Haas
robertmhaas@gmail.com
In reply to: Jaime Casanova (#83)
#87Robert Haas
robertmhaas@gmail.com
In reply to: Emmanuel Cecchet (#85)
#88Emmanuel Cecchet
manu@frogthinker.org
In reply to: Robert Haas (#87)
#89Josh Berkus
josh@agliodbs.com
In reply to: Emmanuel Cecchet (#88)
#90Bruce Momjian
bruce@momjian.us
In reply to: Josh Berkus (#89)
#91Jaime Casanova
jcasanov@systemguards.com.ec
In reply to: Bruce Momjian (#90)
#92Emmanuel Cecchet
manu@frogthinker.org
In reply to: Jaime Casanova (#91)
#93Nikhil Sontakke
nikhil.sontakke@enterprisedb.com
In reply to: Emmanuel Cecchet (#92)
#94ITAGAKI Takahiro
itagaki.takahiro@oss.ntt.co.jp
In reply to: Emmanuel Cecchet (#92)
#95Emmanuel Cecchet
manu@frogthinker.org
In reply to: Nikhil Sontakke (#93)
#96Emmanuel Cecchet
manu@frogthinker.org
In reply to: ITAGAKI Takahiro (#94)
#97ITAGAKI Takahiro
itagaki.takahiro@oss.ntt.co.jp
In reply to: Emmanuel Cecchet (#96)
#98Emmanuel Cecchet
manu@frogthinker.org
In reply to: ITAGAKI Takahiro (#97)
#99Grzegorz Jaskiewicz
gj@pointblue.com.pl
In reply to: ITAGAKI Takahiro (#97)
#100Devrim GÜNDÜZ
devrim@gunduz.org
In reply to: Jaime Casanova (#65)