Change GUC hashtable to use simplehash?

Started by Jeff Davis over 2 years ago, 103 messages, pgsql-hackers
#1 Jeff Davis
pgsql@j-davis.com

I had briefly experimented changing the hash table in guc.c to use
simplehash. It didn't offer any measurable speedup, but the API is
slightly nicer.

I thought I'd post the patch in case others thought this was a good
direction or nice cleanup.

--
Jeff Davis
PostgreSQL Contributor Team - AWS

Attachments:

v2-0001-Convert-GUC-hashtable-to-use-simplehash.patch (text/x-patch; +56 −86)
#2 Gurjeet Singh
gurjeet@singh.im
In reply to: Jeff Davis (#1)
Re: Change GUC hashtable to use simplehash?

On Fri, Nov 17, 2023 at 11:02 AM Jeff Davis <pgsql@j-davis.com> wrote:

I had briefly experimented changing the hash table in guc.c to use
simplehash. It didn't offer any measurable speedup, but the API is
slightly nicer.

I thought I'd post the patch in case others thought this was a good
direction or nice cleanup.

This is not a comment on the patch itself, but since GUC operations
are not typically considered performance or space sensitive, this
comment from simplehash.h makes a case against it.

 * It's probably not worthwhile to generate such a specialized implementation
 * for hash tables that aren't performance or space sensitive.

But your argument of a nicer API might make a case for the patch.

Best regards,
Gurjeet
http://Gurje.et

#3 Jeff Davis
pgsql@j-davis.com
In reply to: Gurjeet Singh (#2)
Re: Change GUC hashtable to use simplehash?

On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:

This is not a comment on the patch itself, but since GUC operations
are not typically considered performance or space sensitive,

A "SET search_path" clause on a CREATE FUNCTION is a case for better
performance in guc.c, because it repeatedly sets and rolls back the
setting on each function invocation.

Unfortunately, this patch doesn't really improve the performance. The
reason the hash table in guc.c is slow is because of the case folding
in both hashing and comparison. I might get around to fixing that,
which could have a minor impact, and perhaps then the choice between
hsearch/simplehash would matter.

this comment from simplehash.h makes a case against it.

 *      It's probably not worthwhile to generate such a specialized implementation
 *      for hash tables that aren't performance or space sensitive.

But your argument of a nicer API might make a case for the patch.

Yeah, that's what I was thinking. simplehash is newer and has a nicer
API, so if we like it and want to move more code over, this is one
step. But if we are fine using both hsearch.h and simplehash.h for
overlapping use cases indefinitely, then I'll drop this.

Regards,
Jeff Davis

#4 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jeff Davis (#3)
Re: Change GUC hashtable to use simplehash?

Jeff Davis <pgsql@j-davis.com> writes:

On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:

But your argument of a nicer API might make a case for the patch.

Yeah, that's what I was thinking. simplehash is newer and has a nicer
API, so if we like it and want to move more code over, this is one
step. But if we are fine using both hsearch.h and simplehash.h for
overlapping use cases indefinitely, then I'll drop this.

I can't imagine wanting to convert *every* hashtable in the system
to simplehash; the added code bloat would be unreasonable. So yeah,
I think we'll have two mechanisms indefinitely. That's not to say
that we might not rewrite hsearch. But simplehash was never meant
to be a universal solution.

regards, tom lane

#5 Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#3)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-17 13:44:21 -0800, Jeff Davis wrote:

On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:

This is not a comment on the patch itself, but since GUC operations
are not typically considered performance or space sensitive,

I don't think that's quite right - we have a lot of GUCs and they're loaded in
each connection. And there's set/reset around transactions etc. So even
without search path stuff that Jeff mentioned, it could be worth optimizing
this.

Yeah, that's what I was thinking. simplehash is newer and has a nicer
API, so if we like it and want to move more code over, this is one
step. But if we are fine using both hsearch.h and simplehash.h for
overlapping use cases indefinitely, then I'll drop this.

Right now there are use cases where simplehash isn't really usable (if stable
pointers to hash elements are needed and/or the entries are very large). I've
been wondering about providing a layer on top of simplehash, or an option to
simplehash, to provide that, though. That then could perhaps also implement
runtime-defined key sizes.

I think this would be a completely fair thing to port over - whether it's
worth it I don't quite know, but I'd not be against it on principle or such.

Greetings,

Andres Freund

#6 Jeff Davis
pgsql@j-davis.com
In reply to: Tom Lane (#4)
Re: Change GUC hashtable to use simplehash?

On Fri, 2023-11-17 at 17:04 -0500, Tom Lane wrote:

I can't imagine wanting to convert *every* hashtable in the system
to simplehash; the added code bloat would be unreasonable.  So yeah,
I think we'll have two mechanisms indefinitely.  That's not to say
that we might not rewrite hsearch.  But simplehash was never meant
to be a universal solution.

OK, I will withdraw the patch until/unless it provides a concrete
benefit.

Regards,
Jeff Davis

#7 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#4)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-17 17:04:04 -0500, Tom Lane wrote:

Jeff Davis <pgsql@j-davis.com> writes:

On Fri, 2023-11-17 at 13:22 -0800, Gurjeet Singh wrote:

But your argument of a nicer API might make a case for the patch.

Yeah, that's what I was thinking. simplehash is newer and has a nicer
API, so if we like it and want to move more code over, this is one
step. But if we are fine using both hsearch.h and simplehash.h for
overlapping use cases indefinitely, then I'll drop this.

I can't imagine wanting to convert *every* hashtable in the system
to simplehash; the added code bloat would be unreasonable.

Yea. And it's also just not suitable for everything. Stable pointers can be
very useful and some places have entries that are too large to be moved during
collisions. Chained hashtables have their place.

So yeah, I think we'll have two mechanisms indefinitely. That's not to say
that we might not rewrite hsearch.

We probably should. It's awkward to use, the code is very hard to follow, and
it's really not very fast. Part of that is due to serving too many masters.
I doubt it's a good idea to use the same code for highly contended, partitioned,
shared memory hashtables and many tiny local memory hashtables. The design
goals are just very different.

Greetings,

Andres Freund

#8 Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#5)
Re: Change GUC hashtable to use simplehash?

On Fri, 2023-11-17 at 14:08 -0800, Andres Freund wrote:

I think this would be a completely fair thing to port over - whether it's
worth it I don't quite know, but I'd not be against it on principle or such.

Right now I don't think it offers much. I'll see if I can solve the
case-folding slowness first, and then maybe it will be measurable.

Regards,
Jeff Davis

#9 Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#6)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-17 14:08:56 -0800, Jeff Davis wrote:

On Fri, 2023-11-17 at 17:04 -0500, Tom Lane wrote:

I can't imagine wanting to convert *every* hashtable in the system
to simplehash; the added code bloat would be unreasonable.  So yeah,
I think we'll have two mechanisms indefinitely.  That's not to say
that we might not rewrite hsearch.  But simplehash was never meant
to be a universal solution.

OK, I will withdraw the patch until/unless it provides a concrete
benefit.

It might already provide one in the space domain:

SELECT count(*), sum(total_bytes) total_bytes, sum(total_nblocks) total_nblocks, sum(free_bytes) free_bytes, sum(free_chunks) free_chunks, sum(used_bytes) used_bytes
FROM pg_backend_memory_contexts
WHERE name LIKE 'GUC%';

HEAD:
┌───────┬─────────────┬───────────────┬────────────┬─────────────┬────────────┐
│ count │ total_bytes │ total_nblocks │ free_bytes │ free_chunks │ used_bytes │
├───────┼─────────────┼───────────────┼────────────┼─────────────┼────────────┤
│     2 │       57344 │             5 │      25032 │          10 │      32312 │
└───────┴─────────────┴───────────────┴────────────┴─────────────┴────────────┘

your patch:
┌───────┬─────────────┬───────────────┬────────────┬─────────────┬────────────┐
│ count │ total_bytes │ total_nblocks │ free_bytes │ free_chunks │ used_bytes │
├───────┼─────────────┼───────────────┼────────────┼─────────────┼────────────┤
│     1 │       36928 │             3 │      12360 │           3 │      24568 │
└───────┴─────────────┴───────────────┴────────────┴─────────────┴────────────┘

However, it fares less well at larger numbers of GUCs, performance-wise. At
first I thought that that's largely because you aren't using SH_STORE_HASH.
With that, it's slower when creating a large number of GUCs, but a good bit
faster retrieving them. But that slowness didn't seem right.

Then I noticed that memory usage was too large when creating many GUCs - a bit
of debugging later, I figured out that that's due to guc_name_hash() being
terrifyingly bad. There's no bit mixing whatsoever! Which leads to very large
numbers of hash conflicts - which simplehash tries to defend against a bit by
making the table larger.

(gdb) p guc_name_hash("andres.c2")
$14 = 3798554171
(gdb) p guc_name_hash("andres.c3")
$15 = 3798554170
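To make the lack of mixing concrete, here is a small Python model (an editorial illustration, not PostgreSQL code) of guc_name_hash() as it stands in master: case-fold each character, rotate the accumulator left by 5, XOR the byte in, with no finishing step. It reproduces the gdb values above exactly:

```python
def rotl32(x, n):
    """32-bit left rotation, like pg_rotate_left32."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def old_guc_name_hash(name):
    """Model of master's guc_name_hash: downcase, rotate-left-5, XOR.
    Nothing disperses the last byte's bits, so similar names hash adjacently."""
    result = 0
    for ch in name:
        c = ord(ch)
        if ord('A') <= c <= ord('Z'):
            c += ord('a') - ord('A')  # case-fold, matching guc_name_compare
        result = rotl32(result, 5) ^ c
    return result

assert old_guc_name_hash("andres.c2") == 3798554171  # $14 in the gdb session
assert old_guc_name_hash("andres.c3") == 3798554170  # differs in only the low bit
```

The last character lands in the low bits unmixed, which is why "andres.c2" and "andres.c3" hash one apart.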

Fixing that makes simplehash always faster, but still doesn't win on memory
usage at the upper end - the two pointers in GUCHashEntry make it too big.

I think, independent of this patch, it might be worth requiring that hash
table lookups apply the transformation before the lookup. A comparison
function this expensive is not great...

Greetings,

Andres Freund

#10 Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#9)
Re: Change GUC hashtable to use simplehash?

On Fri, 2023-11-17 at 15:27 -0800, Andres Freund wrote:

At first I thought that that's largely because you aren't using SH_STORE_HASH.

I might want to use that in the search_path cache, then. The lookup
wasn't showing up much in the profile the last I checked, but I'll take
a second look.

Then I noticed that memory usage was too large when creating many GUCs - a bit
of debugging later, I figured out that that's due to guc_name_hash() being
terrifyingly bad. There's no bit mixing whatsoever!

Wow.

It seems like hash_combine() could be more widely used in other places,
too? Here it seems like a worse problem because strings really need
mixing, and maybe ExecHashGetHashValue doesn't. But it seems easier to
use hash_combine() everywhere so that we don't have to think about
strange cases.

I think, independent of this patch, it might be worth requiring that hash
table lookups apply the transformation before the lookup. A comparison
function this expensive is not great...

The requested name is already case-folded in most contexts. We can do a
lookup first, and if that fails, case-fold and try again. I'll hack up
a patch -- I believe that would be measurable for the proconfigs.
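A sketch of that second-chance idea, using a plain dict as a stand-in for the GUC hash table (the helper names here are invented for illustration, not the patch's actual functions):

```python
def fold_name(name):
    """Case-fold a GUC name the way guc_name_compare does (ASCII A-Z -> a-z)."""
    return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in name)

def second_chance_lookup(table, name):
    """Try the name as given; only pay for case-folding on a miss."""
    entry = table.get(name)
    if entry is not None:
        return entry                       # common case: name is already folded
    return table.get(fold_name(name))      # second chance: fold, then retry

guc_table = {"search_path": "public"}
second_chance_lookup(guc_table, "search_path")   # hit on the first probe
second_chance_lookup(guc_table, "Search_Path")   # hit on the second probe
```

The already-folded fast path never touches the folding code, which is the measurable win for proconfig-style repeated lookups.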

Regards,
Jeff Davis

#11 Andres Freund
andres@anarazel.de
In reply to: Jeff Davis (#10)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-17 16:01:31 -0800, Jeff Davis wrote:

On Fri, 2023-11-17 at 15:27 -0800, Andres Freund wrote:

At first I thought that that's largely because you aren't using SH_STORE_HASH.

I might want to use that in the search_path cache, then. The lookup
wasn't showing up much in the profile the last I checked, but I'll take
a second look.

It also matters for insertions, fwiw.

Then I noticed that memory usage was too large when creating many GUCs - a bit
of debugging later, I figured out that that's due to guc_name_hash() being
terrifyingly bad. There's no bit mixing whatsoever!

Wow.

It seems like hash_combine() could be more widely used in other places,
too?

I don't think hash_combine() alone helps that much - you need to actually use
a hash function for the values you are combining. Using a character value
alone as a 32-bit hash value unsurprisingly leads to a very poor distribution
of bits set in hash values.

Here it seems like a worse problem because strings really need
mixing, and maybe ExecHashGetHashValue doesn't. But it seems easier to
use hash_combine() everywhere so that we don't have to think about
strange cases.

Yea.

I think, independent of this patch, it might be worth requiring that hash
table lookups apply the transformation before the lookup. A comparison
function this expensive is not great...

The requested name is already case-folded in most contexts. We can do a
lookup first, and if that fails, case-fold and try again. I'll hack up
a patch -- I believe that would be measurable for the proconfigs.

I'd just always case fold before lookups. The expensive bit of the case
folding imo is that you need to do awkward things during hash lookups.

Greetings,

Andres Freund

#12 Jeff Davis
pgsql@j-davis.com
In reply to: Andres Freund (#11)
Re: Change GUC hashtable to use simplehash?

Hi,

On Fri, 2023-11-17 at 16:10 -0800, Andres Freund wrote:

The requested name is already case-folded in most contexts. We can do a
lookup first, and if that fails, case-fold and try again. I'll hack up
a patch -- I believe that would be measurable for the proconfigs.

I'd just always case fold before lookups. The expensive bit of the case
folding imo is that you need to do awkward things during hash lookups.

Attached are a bunch of tiny patches and some perf numbers based on
simple test described here:

/messages/by-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel@j-davis.com

0001: Use simplehash (without SH_STORE_HASH)

0002: fold before lookups

0003: have gen->name_key alias gen->name in typical case. Saves
allocations in typical case where the name is already folded.

0004: second-chance lookup in hash table (avoids case-folding for
already-folded names)

0005: Use SH_STORE_HASH

(These are split out into tiny patches for perf measurement, some are
pretty obvious but I wanted to see the impact, if any.)

Numbers below are cumulative (i.e. 0003 includes 0002 and 0001):
master: 7899ms
0001: 7850
0002: 7958
0003: 7942
0004: 7549
0005: 7411

I'm inclined toward all of these patches. I'll also look at adding
SH_STORE_HASH for the search_path cache.

Looks like we're on track to bring the overhead of SET search_path down
to reasonable levels. Thank you!

Regards,
Jeff Davis

Attachments:

v3-0005-Use-SH_STORE_HASH-for-GUC-hash-table.patch (text/x-patch; +3 −1)
v3-0004-GUC-optimize-for-already-case-folded-names.patch (text/x-patch; +15 −9)
v3-0003-Avoid-duplicating-GUC-name-when-it-s-already-case.patch (text/x-patch; +60 −14)
v3-0002-Case-fold-earlier.patch (text/x-patch; +72 −32)
v3-0001-Convert-GUC-hashtable-to-use-simplehash.patch (text/x-patch; +56 −86)
#13 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#12)
Re: Change GUC hashtable to use simplehash?

On Mon, Nov 20, 2023 at 5:54 AM Jeff Davis <pgsql@j-davis.com> wrote:

Attached are a bunch of tiny patches and some perf numbers based on
simple test described here:

/messages/by-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel@j-davis.com

I tried taking I/O out, like this, thinking the times would be less variable:

cat bench.sql
select 1 from generate_series(1,500000) x(x), lateral (SELECT inc_ab(x)) a offset 10000000;

(with turbo off)
pgbench -n -T 30 -f bench.sql -M prepared

master:
latency average = 643.625 ms
0001-0005:
latency average = 607.354 ms

...about 5.5% less time, similar to what Jeff found.

I get a noticeable regression in 0002, though, and I think I see why:

 guc_name_hash(const char *name)
 {
- uint32 result = 0;
+ const unsigned char *bytes = (const unsigned char *)name;
+ int                  blen  = strlen(name);

The strlen call required for hashbytes() is not free. The lack of
mixing in the (probably inlined after 0001) previous hash function can be
remedied directly, as in the attached:

0001-0002 only:
latency average = 670.059 ms

0001-0002, plus revert hashbytes, add finalizer:
latency average = 656.810 ms

-#define SH_EQUAL(tb, a, b) (guc_name_compare(a, b) == 0)
+#define SH_EQUAL(tb, a, b) (strcmp(a, b) == 0)

Likewise, I suspect calling out to the C library is going to throw
away some of the gains that were won by not needing to downcase all
the time, but I haven't dug deeper.

Attachments:

0002-ADDENDUM-add-finalizer-to-guc-name-hash.patch.txt (text/plain; +10 −3)
#14 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#13)
Re: Change GUC hashtable to use simplehash?

On Tue, 2023-11-21 at 16:42 +0700, John Naylor wrote:

The strlen call required for hashbytes() is not free.

Should we have a hash_string() that's like hash_bytes() but checks for
the NUL terminator itself?

That wouldn't be inlinable, but it would save on the strlen() call. It
might benefit some other callers?

Regards,
Jeff Davis

#15 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#14)
Re: Change GUC hashtable to use simplehash?

On Wed, Nov 22, 2023 at 12:00 AM Jeff Davis <pgsql@j-davis.com> wrote:

On Tue, 2023-11-21 at 16:42 +0700, John Naylor wrote:

The strlen call required for hashbytes() is not free.

Should we have a hash_string() that's like hash_bytes() but checks for
the NUL terminator itself?

That wouldn't be inlinable, but it would save on the strlen() call. It
might benefit some other callers?

We do have string_hash(), which...calls strlen. :-)

Thinking some more, I'm not quite comfortable with the number of
places in these patches that have to know about the pre-downcased
strings, or whether we need that in the first place. If lower case is
common enough to optimize for, it seems the equality function can just
check strict equality on the char and only on mismatch try downcasing
before returning false. Doing our own function would allow the
compiler to inline it, or at least keep it on the same page. Further,
the old hash function shouldn't need to branch to do the same
downcasing, since hashing is lossy anyway. In the keyword hashes, we
just do "*ch |= 0x20", which downcases letters and turns underscores to
DEL. I can take a stab at that later.
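Both ideas can be modeled in a few lines of Python (an editorial sketch with invented helper names, not the actual patch): the branch-free `|= 0x20` fold for hashing, and an equality check that compares strictly first and only downcases on a mismatch:

```python
def or_0x20(ch):
    """The keyword-hash downcasing trick: OR with 0x20.
    ASCII 'A'..'Z' (0x41..0x5A) become 'a'..'z' (0x61..0x7A), and
    '_' (0x5F) becomes DEL (0x7F) -- harmless for hashing, which only
    needs to be consistent, not reversible."""
    return chr(ord(ch) | 0x20)

def ascii_fold(ch):
    """Exact case-fold for the equality path (A-Z -> a-z, nothing else)."""
    return chr(ord(ch) + 32) if "A" <= ch <= "Z" else ch

def lazy_name_eq(a, b):
    """Compare strictly; only when two chars differ, fold that pair
    before declaring inequality -- so already-folded names never branch."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y and ascii_fold(x) != ascii_fold(y):
            return False
    return True

assert or_0x20('A') == 'a'
assert or_0x20('_') == '\x7f'      # underscore maps to DEL
assert lazy_name_eq("Search_Path", "search_path")
```

Note the equality path must use the exact fold, since `|= 0x20` is lossy (it conflates '_' with DEL); only the hash can afford that.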

#16 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#15)
Re: Change GUC hashtable to use simplehash?

I wrote:

Thinking some more, I'm not quite comfortable with the number of
places in these patches that have to know about the pre-downcased
strings, or whether we need that in the first place. If lower case is
common enough to optimize for, it seems the equality function can just
check strict equality on the char and only on mismatch try downcasing
before returning false. Doing our own function would allow the
compiler to inline it, or at least keep it on the same page. Further,
the old hash function shouldn't need to branch to do the same
downcasing, since hashing is lossy anyway. In the keyword hashes, we
just do "*ch |= 0x20", which downcases letters and turns underscores to
DEL. I can take a stab at that later.

v4 is a quick POC for that. I haven't verified that it's correct for
the case where the probe and the entry don't match, but if it isn't, it
should be easy to fix. I also didn't bother with SH_STORE_HASH in my
testing.

0001 adds the murmur32 finalizer -- we should do that regardless of
anything else in this thread.
0002 is just Jeff's 0001
0003 adds an equality function that downcases lazily, and teaches the
hash function about the 0x20 trick.

master:
latency average = 581.765 ms

v3 0001-0005:
latency average = 544.576 ms

v4 0001-0003:
latency average = 547.489 ms

This gives similar results with a tiny amount of code (excluding the
simplehash conversion). I didn't check if the compiler inlined these
functions, but we can hint it if necessary. We could use the new
equality function in all the call sites that currently test for
"guc_name_compare() == 0", in which case it might not end up inlined,
but that's probably okay.

We could also try to improve the hash function's collision behavior by
collecting the bytes on a uint64 and calling our new murmur64 before
returning the lower half, but that's speculative.
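That speculative idea could look something like this sketch (names invented here; the finalizer is the standard 64-bit fmix64 step from MurmurHash3): collect the downcased bytes into a uint64 accumulator, mix once at the end, and return the low half:

```python
MASK64 = (1 << 64) - 1

def fmix64(k):
    """The 64-bit finalizer from MurmurHash3, used to disperse all bits."""
    k ^= k >> 33
    k = (k * 0xFF51AFD7ED558CCD) & MASK64
    k ^= k >> 33
    k = (k * 0xC4CEB9FE1A85EC53) & MASK64
    k ^= k >> 33
    return k

def name_hash64(name):
    """Sketch: fold bytes into a uint64, then one strong mix at the end.
    The |= 0x20 fold downcases letters (and remaps a few other bytes,
    consistently, which is fine for hashing)."""
    acc = 0
    for ch in name.encode():
        acc = ((acc << 8) | (acc >> 56)) & MASK64  # rotate to spread bytes
        acc ^= ch | 0x20                            # downcase on the fly
    return fmix64(acc) & 0xFFFFFFFF                 # return the lower half
```

With 64 bits of state, names up to 8 bytes don't overwrite each other at all before the finalizer runs.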

Attachments:

v4-0002-Convert-GUC-hashtable-to-use-simplehash.patch (text/x-patch; +56 −86)
v4-0001-Add-finalizer-to-guc_name_hash.patch (text/x-patch; +2 −2)
v4-0003-Optimize-GUC-functions-for-simple-hash.patch (text/x-patch; +36 −5)
#17 Andres Freund
andres@anarazel.de
In reply to: John Naylor (#13)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-21 16:42:55 +0700, John Naylor wrote:

I get a noticeable regression in 0002, though, and I think I see why:

guc_name_hash(const char *name)
{
- uint32 result = 0;
+ const unsigned char *bytes = (const unsigned char *)name;
+ int                  blen  = strlen(name);

The strlen call required for hashbytes() is not free. The lack of
mixing in the (probably inlined after 0001) previous hash function can be
remedied directly, as in the attached:

I doubt this is a good hash function. For short strings, sure, but after
that... I don't think it makes sense to reduce the internal state of a hash
function to something this small.

Greetings,

Andres Freund

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#17)
Re: Change GUC hashtable to use simplehash?

Andres Freund <andres@anarazel.de> writes:

On 2023-11-21 16:42:55 +0700, John Naylor wrote:

The strlen call required for hashbytes() is not free. The lack of
mixing in the (probably inlined after 0001) previous hash function can be
remedied directly, as in the attached:

I doubt this is a good hashfunction. For short strings, sure, but after
that... I don't think it makes sense to reduce the internal state of a hash
function to something this small.

GUC names are just about always short, though, so I'm not sure you've
made your point? At worst, maybe this with 64-bit state instead of 32?

regards, tom lane

#19 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#18)
Re: Change GUC hashtable to use simplehash?

Hi,

On 2023-11-22 15:56:21 -0500, Tom Lane wrote:

Andres Freund <andres@anarazel.de> writes:

On 2023-11-21 16:42:55 +0700, John Naylor wrote:

The strlen call required for hashbytes() is not free. The lack of
mixing in the (probably inlined after 0001) previous hash function can be
remedied directly, as in the attached:

I doubt this is a good hashfunction. For short strings, sure, but after
that... I don't think it makes sense to reduce the internal state of a hash
function to something this small.

GUC names are just about always short, though, so I'm not sure you've
made your point?

With short I meant <= 6 characters (32 / 5 = 6.x). After that you're
overwriting bits that you previously set, without dispersing the "overwritten"
bits throughout the hash state.

It's pretty easy to create conflicts this way, even just on paper. E.g. I
think abcdefgg and cbcdefgw would have the same hash, because the accumulated
value passed to murmurhash32 is the same.

The fact that this happens when a large part of the string is the same
is bad, because it makes it more likely that prefixed strings trigger such
conflicts, and they're obviously common with GUC strings.

Greetings,

Andres Freund

#20 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#19)
Re: Change GUC hashtable to use simplehash?

Andres Freund <andres@anarazel.de> writes:

On 2023-11-22 15:56:21 -0500, Tom Lane wrote:

GUC names are just about always short, though, so I'm not sure you've
made your point?

With short I meant <= 6 characters (32 / 5 = 6.x). After that you're
overwriting bits that you previously set, without dispersing the "overwritten"
bits throughout the hash state.

I'm less than convinced about the "overwrite" part:

+		/* Merge into hash ... not very bright, but it needn't be */
+		result = pg_rotate_left32(result, 5);
+		result ^= (uint32) ch;

Rotating a 32-bit value 5 bits at a time doesn't result in successive
characters lining up exactly, and even once they do, XOR is not
"overwrite". I'm pretty dubious that we need something better than this.

regards, tom lane

#21 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#20)
#22 John Naylor
john.naylor@enterprisedb.com
In reply to: Andres Freund (#21)
#23 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#22)
#24 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: John Naylor (#23)
#25 John Naylor
john.naylor@enterprisedb.com
In reply to: Heikki Linnakangas (#24)
#26 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#23)
#27 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#23)
#28 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#27)
#29 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#28)
#30 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#29)
#31 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#30)
#32 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#31)
#33 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#23)
#34 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#30)
#35 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#33)
#36 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#35)
#37 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#36)
#38 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#37)
#39 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#36)
#40 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#39)
#41 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#39)
#42 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#41)
#43 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#42)
#44 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#43)
#45 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#44)
#46 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#45)
#47 jian he
jian.universality@gmail.com
In reply to: John Naylor (#46)
#48 John Naylor
john.naylor@enterprisedb.com
In reply to: jian he (#47)
#49 jian he
jian.universality@gmail.com
In reply to: John Naylor (#48)
#50 John Naylor
john.naylor@enterprisedb.com
In reply to: jian he (#49)
#51 jian he
jian.universality@gmail.com
In reply to: John Naylor (#50)
#52 John Naylor
john.naylor@enterprisedb.com
In reply to: jian he (#51)
#53 jian he
jian.universality@gmail.com
In reply to: John Naylor (#52)
#54 John Naylor
john.naylor@enterprisedb.com
In reply to: jian he (#53)
#55 Junwang Zhao
zhjwpku@gmail.com
In reply to: John Naylor (#54)
#56 John Naylor
john.naylor@enterprisedb.com
In reply to: Junwang Zhao (#55)
#57 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#56)
#58 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: John Naylor (#57)
#59 John Naylor
john.naylor@enterprisedb.com
In reply to: Heikki Linnakangas (#58)
#60 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: John Naylor (#59)
#61 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#59)
#62 Jeff Davis
pgsql@j-davis.com
In reply to: Jeff Davis (#61)
#63 John Naylor
john.naylor@enterprisedb.com
In reply to: Heikki Linnakangas (#60)
#64 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#62)
#65 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#64)
#66 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#65)
#67 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#66)
#68 Ants Aasma
ants.aasma@cybertec.at
In reply to: Jeff Davis (#65)
#69 John Naylor
john.naylor@enterprisedb.com
In reply to: Ants Aasma (#68)
#70 Ants Aasma
ants.aasma@cybertec.at
In reply to: John Naylor (#69)
#71 John Naylor
john.naylor@enterprisedb.com
In reply to: Ants Aasma (#70)
#72 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#67)
#73 Peter Eisentraut
peter_e@gmx.net
In reply to: John Naylor (#66)
#74 John Naylor
john.naylor@enterprisedb.com
In reply to: Peter Eisentraut (#73)
#75 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#69)
#76 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#75)
#77 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#76)
#78 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#77)
#79 Jeff Davis
pgsql@j-davis.com
In reply to: John Naylor (#78)
#80 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#79)
#81 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#75)
#82 John Naylor
john.naylor@enterprisedb.com
In reply to: Jeff Davis (#79)
#83 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#82)
#84 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#83)
#85 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#84)
#86 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#84)
#87 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#85)
#88 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Anton A. Melnikov (#87)
#89 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: Tom Lane (#88)
#90 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#89)
#91 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#90)
#92 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#91)
#93 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#92)
#94 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#93)
#95 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#94)
#96 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#95)
#97 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#96)
#98 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#97)
#99 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#98)
#100 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#99)
#101 John Naylor
john.naylor@enterprisedb.com
In reply to: Anton A. Melnikov (#100)
#102 John Naylor
john.naylor@enterprisedb.com
In reply to: John Naylor (#101)
#103 Anton A. Melnikov
a.melnikov@postgrespro.ru
In reply to: John Naylor (#102)