Compressed TOAST Slicing

Started by Paul Ramsey, over 7 years ago · 60 messages · pgsql-hackers
#1Paul Ramsey
pramsey@cleverelephant.ca

Currently, PG_DETOAST_DATUM_SLICE when run on a compressed TOAST entry will
first decompress the whole object, then extract the relevant slice.

When the desired slice is at or near the front of the object, this is
obviously non-optimal.

The attached patch adds in a code path to do a partial decompression of the
TOAST entry, when the requested slice is at the start of the object.

For an example of the improvement possible, this trivial example:

create table slicingtest (
id serial primary key,
a text
);

insert into slicingtest (a) select repeat('xyz123', 10000) as a from
generate_series(1,10000);
\timing
select sum(length(substr(a, 0, 20))) from slicingtest;

On master, in the current state on my wee laptop, I get

Time: 1426.737 ms (00:01.427)

With the patch, on my wee laptop, I get

Time: 46.886 ms

As usual, doing less work is faster.

Interesting note to motivate a follow-on patch: the substr() function does
attempt to slice, but the left() function does not. So, if this patch is
accepted, next patch will be to left() to add slicing behaviour.

If nobody lights me on fire, I'll submit to commitfest shortly.

P.

Attachments:

compressed-datum-slicing-20190101a.patch (application/octet-stream, +48 -23)
#2Stephen Frost
sfrost@snowman.net
In reply to: Paul Ramsey (#1)
Re: Compressed TOAST Slicing

Greetings,

* Paul Ramsey (pramsey@cleverelephant.ca) wrote:

The attached patch adds in a code path to do a partial decompression of the
TOAST entry, when the requested slice is at the start of the object.

Neat!

As usual, doing less work is faster.

Definitely.

Interesting note to motivate a follow-on patch: the substr() function does
attempt to slice, but the left() function does not. So, if this patch is
accepted, next patch will be to left() to add slicing behaviour.

Makes sense to me.

There are two things that I wonder about in the patch- if it would be of any
use to try and allocate on a need basis instead of just allocating the
whole chunk up to the toast size, and secondly, why we wouldn't consider
handling a non-zero offset. A non-zero offset would, of course, still
require decompressing from the start and then just throwing away what we
skip over, but we're going to be doing that anyway, aren't we? Why not
stop when we get to the end, at least, and save ourselves the trouble of
decompressing the rest and then throwing it away.

If nobody lights me on fire, I'll submit to commitfest shortly.

Sounds like a good idea to me.

Thanks!

Stephen

#3Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Stephen Frost (#2)
Re: Compressed TOAST Slicing

On Thu, Nov 1, 2018 at 2:29 PM Stephen Frost <sfrost@snowman.net> wrote:

Greetings,

* Paul Ramsey (pramsey@cleverelephant.ca) wrote:

The attached patch adds in a code path to do a partial decompression of the
TOAST entry, when the requested slice is at the start of the object.

There are two things that I wonder about in the patch- if it would be of any
use to try and allocate on a need basis instead of just allocating the
whole chunk up to the toast size,

I'm not sure what I was thinking when I rejected allocating the slice size
in favour of the whole uncompressed size... I cannot see why that wouldn't
work.

and secondly, why we wouldn't consider
handling a non-zero offset. A non-zero offset would, of course, still
require decompressing from the start and then just throwing away what we
skip over, but we're going to be doing that anyway, aren't we? Why not
stop when we get to the end, at least, and save ourselves the trouble of
decompressing the rest and then throwing it away.

I was worried about changing the pg_lz code too much because it scared me,
but debugging some stuff made me read it more closely so I fear it less
now, and doing interior slices seems not unreasonable, so I will give it a
try.

P

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Paul Ramsey (#3)
Re: Compressed TOAST Slicing

Paul Ramsey <pramsey@cleverelephant.ca> writes:

On Thu, Nov 1, 2018 at 2:29 PM Stephen Frost <sfrost@snowman.net> wrote:

and secondly, why we wouldn't consider
handling a non-zero offset. A non-zero offset would, of course, still
require decompressing from the start and then just throwing away what we
skip over, but we're going to be doing that anyway, aren't we? Why not
stop when we get to the end, at least, and save ourselves the trouble of
decompressing the rest and then throwing it away.

I was worried about changing the pg_lz code too much because it scared me,
but debugging some stuff made me read it more closely so I fear it less
now, and doing interior slices seems not unreasonable, so I will give it a
try.

I think Stephen was just envisioning decompressing from offset 0 up to
the end of what's needed, and then discarding any data before the start
of what's needed; at least, that's what'd occurred to me. It sounds like
you were thinking about hacking pg_lz to not write the leading data
anywhere. While that'd likely be a win for cases where there was leading
data to discard, I'm worried about adding any cycles to the inner loops
of the decompressor. We don't want to pessimize every other use of pg_lz
to buy a little bit for these cases.

regards, tom lane

#5Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Tom Lane (#4)
Re: Compressed TOAST Slicing

On Thu, Nov 1, 2018 at 4:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Paul Ramsey <pramsey@cleverelephant.ca> writes:

On Thu, Nov 1, 2018 at 2:29 PM Stephen Frost <sfrost@snowman.net> wrote:

and secondly, why we wouldn't consider
handling a non-zero offset. A non-zero offset would, of course, still
require decompressing from the start and then just throwing away what we
skip over, but we're going to be doing that anyway, aren't we? Why not
stop when we get to the end, at least, and save ourselves the trouble of
decompressing the rest and then throwing it away.

I was worried about changing the pg_lz code too much because it scared me,
but debugging some stuff made me read it more closely so I fear it less
now, and doing interior slices seems not unreasonable, so I will give it a
try.

I think Stephen was just envisioning decompressing from offset 0 up to
the end of what's needed, and then discarding any data before the start
of what's needed; at least, that's what'd occurred to me.

Understood, that makes lots of sense and is a very small change, it turns
out :)
Allocating just what is needed also makes things faster yet, which is nice,
and no big surprise.
Some light testing seems to show no obvious regression in speed of
decompression for the usual "decompress it all" case.

P

Attachments:

compressed-datum-slicing-20190102a.patch (application/octet-stream, +48 -23)
#6Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Paul Ramsey (#5)
Re: Compressed TOAST Slicing

As threatened, I have added a patch to make left() use sliced access as
well.

Attachments:

compressed-datum-slicing-20190102a.patch (application/octet-stream, +48 -23)
compressed-datum-slicing-left-20190102a.patch (application/octet-stream, +13 -8)
#7Rafia Sabih
rafia.sabih@enterprisedb.com
In reply to: Paul Ramsey (#6)
Re: Compressed TOAST Slicing

On Fri, Nov 2, 2018 at 11:55 PM Paul Ramsey <pramsey@cleverelephant.ca> wrote:

As threatened, I have added a patch to make left() use sliced access as well.

Hi Paul,

The idea looks good and believing your performance evaluation it seems
like a practical one too.

I had a look at this patch and here are my initial comments,
1.
- if (dp != destend || sp != srcend)
+ if (!is_slice && (dp != destend || sp != srcend))
  return -1;
A comment explaining how this check differs for is_slice case would be helpful.
2.
- int len = VARSIZE_ANY_EXHDR(str);
- int n = PG_GETARG_INT32(1);
- int rlen;
+ int n = PG_GETARG_INT32(1);

Looks like PG indentation is not followed here for n.

--
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

#8Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Rafia Sabih (#7)
Re: Compressed TOAST Slicing

On Sun, Dec 2, 2018 at 7:03 AM Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

The idea looks good and believing your performance evaluation it seems
like a practical one too.

Thank you kindly for the review!

A comment explaining how this check differs for is_slice case would be
helpful.

Looks like PG indentation is not followed here for n.

I have attached updated patches that add the comment and adhere to the Pg
variable declaration indentation styles,
ATB!
P

--
Paul Ramsey
http://crunchydata.com

Attachments:

compressed-datum-slicing-left-20190103a.patch (application/octet-stream, +13 -8)
compressed-datum-slicing-20190103a.patch (application/octet-stream, +51 -23)
#9Andres Freund
andres@anarazel.de
In reply to: Paul Ramsey (#8)
Re: Compressed TOAST Slicing

Hi Stephen,

On 2018-12-06 12:54:18 -0800, Paul Ramsey wrote:

On Sun, Dec 2, 2018 at 7:03 AM Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

The idea looks good and believing your performance evaluation it seems
like a practical one too.

Thank you kindly for the review!

A comment explaining how this check differs for is_slice case would be
helpful.

Looks like PG indentation is not followed here for n.

I have attached updated patches that add the comment and adhere to the Pg
variable declaration indentation styles,
ATB!
P

You were mentioning committing this at the Brussels meeting... :)

Greetings,

Andres Freund

#10Simon Riggs
simon@2ndQuadrant.com
In reply to: Paul Ramsey (#8)
Re: Compressed TOAST Slicing

On Thu, 6 Dec 2018 at 20:54, Paul Ramsey <pramsey@cleverelephant.ca> wrote:

On Sun, Dec 2, 2018 at 7:03 AM Rafia Sabih <rafia.sabih@enterprisedb.com>
wrote:

The idea looks good and believing your performance evaluation it seems
like a practical one too.

Thank you kindly for the review!

Sounds good.

Could we get a similarly optimized implementation of -> operator for JSONB
as well?

Are there any other potential uses? Best to fix em all up at once and then
move on to other things. Thanks.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#11Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Simon Riggs (#10)
Re: Compressed TOAST Slicing

On Sat, Feb 16, 2019 at 7:25 AM Simon Riggs <simon@2ndquadrant.com> wrote:

Could we get a similarly optimized implementation of -> operator for JSONB as well?
Are there any other potential uses? Best to fix em all up at once and then move on to other things. Thanks.

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

I had a look at "->" and I couldn't see how a slice could be used to
make it faster? We don't a priori know how big a slice would give us
what we want. This again makes Stephen's case for an iterator, but of
course all the iterator benefits only come when the actual function at
the top (in this case the json parser) are also updated to be
iterative.

Committing this little change doesn't preclude an iterator, or even
make doing one more complicated... :)

P.

#12Юрий Соколов
funny.falcon@gmail.com
In reply to: Paul Ramsey (#11)
Re: Compressed TOAST Slicing

Some time ago I posted a PoC patch with an alternative TOAST compression scheme:
instead of "compress-then-chunk" I suggested "chunk-then-compress". It compresses
somewhat less well, but allows efficient arbitrary slicing.

Wed, 20 Feb 2019, 02:09 Paul Ramsey <pramsey@cleverelephant.ca>:


On Sat, Feb 16, 2019 at 7:25 AM Simon Riggs <simon@2ndquadrant.com> wrote:

Could we get a similarly optimized implementation of -> operator for
JSONB as well?

Are there any other potential uses? Best to fix em all up at once and
then move on to other things. Thanks.

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

I had a look at "->" and I couldn't see how a slice could be used to
make it faster? We don't a priori know how big a slice would give us
what we want. This again makes Stephen's case for an iterator, but of
course all the iterator benefits only come when the actual function at
the top (in this case the json parser) are also updated to be
iterative.

Committing this little change doesn't preclude an iterator, or even
make doing one more complicated... :)

P.

#13Simon Riggs
simon@2ndQuadrant.com
In reply to: Paul Ramsey (#11)
Re: Compressed TOAST Slicing

On Tue, 19 Feb 2019 at 23:09, Paul Ramsey <pramsey@cleverelephant.ca> wrote:

On Sat, Feb 16, 2019 at 7:25 AM Simon Riggs <simon@2ndquadrant.com> wrote:

Could we get a similarly optimized implementation of -> operator for
JSONB as well?

Are there any other potential uses? Best to fix em all up at once and
then move on to other things. Thanks.

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

I had a look at "->" and I couldn't see how a slice could be used to
make it faster? We don't a priori know how big a slice would give us
what we want. This again makes Stephen's case for an iterator, but of
course all the iterator benefits only come when the actual function at
the top (in this case the json parser) are also updated to be
iterative.

Committing this little change doesn't preclude an iterator, or even
make doing one more complicated... :)

Sure, but we have the choice between something that benefits just a few
cases or one that benefits more widely.

If we all only work on the narrow use cases that are right in front of us
at the present moment then we would not have come this far. I'm sure many
GIS applications also store JSONB data, so you would be helping the
performance of the whole app, even if there isn't much JSON in PostGIS.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#14Andres Freund
andres@anarazel.de
In reply to: Simon Riggs (#13)
Re: Compressed TOAST Slicing

On 2019-02-20 08:39:38 +0000, Simon Riggs wrote:

On Tue, 19 Feb 2019 at 23:09, Paul Ramsey <pramsey@cleverelephant.ca> wrote:

On Sat, Feb 16, 2019 at 7:25 AM Simon Riggs <simon@2ndquadrant.com> wrote:

Could we get a similarly optimized implementation of -> operator for
JSONB as well?

Are there any other potential uses? Best to fix em all up at once and
then move on to other things. Thanks.

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

I had a look at "->" and I couldn't see how a slice could be used to
make it faster? We don't a priori know how big a slice would give us
what we want. This again makes Stephen's case for an iterator, but of
course all the iterator benefits only come when the actual function at
the top (in this case the json parser) are also updated to be
iterative.

Committing this little change doesn't preclude an iterator, or even
make doing one more complicated... :)

Sure, but we have the choice between something that benefits just a few
cases or one that benefits more widely.

If we all only work on the narrow use cases that are right in front of us
at the present moment then we would not have come this far. I'm sure many
GIS applications also store JSONB data, so you would be helping the
performance of the whole app, even if there isn't much JSON in PostGIS.

-1, I think this is blowing up the complexity of an already useful patch,
even though there's no increase in complexity due to the patch proposed
here. I totally get wanting incremental decompression for jsonb, but I
don't see why Paul should be held hostage for that.

Greetings,

Andres Freund

#15Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#14)
Re: Compressed TOAST Slicing

On Wed, Feb 20, 2019 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:

-1, I think this is blowing up the complexity of an already useful patch,
even though there's no increase in complexity due to the patch proposed
here. I totally get wanting incremental decompression for jsonb, but I
don't see why Paul should be held hostage for that.

I concur.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#16Simon Riggs
simon@2ndQuadrant.com
In reply to: Andres Freund (#14)
Re: Compressed TOAST Slicing

On Wed, 20 Feb 2019 at 16:27, Andres Freund <andres@anarazel.de> wrote:

Sure, but we have the choice between something that benefits just a few
cases or one that benefits more widely.

If we all only work on the narrow use cases that are right in front of us
at the present moment then we would not have come this far. I'm sure many
GIS applications also store JSONB data, so you would be helping the
performance of the whole app, even if there isn't much JSON in PostGIS.

-1, I think this is blowing up the complexity of an already useful patch,
even though there's no increase in complexity due to the patch proposed
here. I totally get wanting incremental decompression for jsonb, but I
don't see why Paul should be held hostage for that.

Not sure I agree with your emotive language. Review comments != holding
hostages.

If we add one set of code now and need to add another different one later,
we will have 2 sets of code that do similar things.

I'm surprised to hear you think that is a good thing.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#17Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Simon Riggs (#16)
Re: Compressed TOAST Slicing

On Feb 20, 2019, at 10:37 AM, Simon Riggs <simon@2ndquadrant.com> wrote:

-1, I think this is blowing up the complexity of an already useful patch,
even though there's no increase in complexity due to the patch proposed
here. I totally get wanting incremental decompression for jsonb, but I
don't see why Paul should be held hostage for that.

Not sure I agree with your emotive language. Review comments != holding hostages.

If we add one set of code now and need to add another different one later, we will have 2 sets of code that do similar things.

So, the current state is: when asked for a datum slice, we can now decompress just the parts we need to produce that slice. This allows us to speed up anything that knows in advance how big a slice it is going to want. At the moment all I’ve found is left() and substr() for the start-at-front case.

What this does not support: any function that probably wants less-than-everything, but doesn’t know how big a slice to look for. Stephen thinks I should put an iterator on decompression, which would be an interesting piece of work. Having looked at the json code a little, doing partial searches would require a lot of re-work that is above my paygrade, but if there was an iterator in place, at least that next stop would then be open.

Note that adding an iterator isn’t adding two ways to do the same thing, since the iterator would slot nicely underneath the existing slicing API, and just iterate to the requested slice size. So this is easily just “another step” along the train line to providing streaming access to compressed and TOASTed data.

I’d hate for the simple slice ability to get stuck behind the other work, since it’s both (a) useful and (b) exists. If you are concerned the iterator will never get done, I can only offer my word that, since it seems important to multiple people on this list, I will do it. (Just not, maybe, very well :)

P.

#18Daniel Verite
daniel@manitou-mail.org
In reply to: Paul Ramsey (#11)
Re: Compressed TOAST Slicing

Paul Ramsey wrote:

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

What about starts_with(string, prefix)?

text_starts_with(arg1,arg2) in varlena.c does a full decompression
of arg1 when it could limit itself to the length of the smaller arg2:

Datum
text_starts_with(PG_FUNCTION_ARGS)
....
    len1 = toast_raw_datum_size(arg1);
    len2 = toast_raw_datum_size(arg2);
    if (len2 > len1)
        result = false;
    else
    {
        text *targ1 = DatumGetTextPP(arg1);
        text *targ2 = DatumGetTextPP(arg2);

        result = (memcmp(VARDATA_ANY(targ1), VARDATA_ANY(targ2),
                         VARSIZE_ANY_EXHDR(targ2)) == 0);
    ...

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite

#19Robert Haas
robertmhaas@gmail.com
In reply to: Paul Ramsey (#17)
Re: Compressed TOAST Slicing

On Wed, Feb 20, 2019 at 1:45 PM Paul Ramsey <pramsey@cleverelephant.ca> wrote:

What this does not support: any function that probably wants less-than-everything, but doesn’t know how big a slice to look for. Stephen thinks I should put an iterator on decompression, which would be an interesting piece of work. Having looked at the json code a little doing partial searches would require a lot of re-work that is above my paygrade, but if there was an iterator in place, at least that next stop would then be open.

Note that adding an iterator isn’t adding two ways to do the same thing, since the iterator would slot nicely underneath the existing slicing API, and just iterate to the requested slice size. So this is easily just “another step” along the train line to providing streaming access to compressed and TOASTed data.

Yeah. Plus, I'm not sure the iterator thing is even the right design
for the JSONB case. It might be better to think, for that case, about
whether there's someway to operate directly on the compressed data.
If you could somehow jigger the format and the chunking so that you
could jump directly to the right chunk and decompress from there,
rather than having to walk over all of the earlier chunks to figure
out where the data you want is, you could probably obtain a large
performance benefit. But figuring out how to design such a scheme
seems pretty far afield from the topic at hand.

I'd actually be inclined not to add an iterator until we have a real
user for it, for exactly the reason that we don't know that it is the
right thing. But there is certain value in decompressing partially,
to a known byte position, as your patch does, no matter what we decide
to do about that stuff.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#20Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Daniel Verite (#18)
Re: Compressed TOAST Slicing

On Wed, Feb 20, 2019 at 10:50 AM Daniel Verite <daniel@manitou-mail.org> wrote:

Paul Ramsey wrote:

Oddly enough, I couldn't find many/any things that were sensitive to
left-end decompression. The only exception is "LIKE this%" which
clearly would be helped, but unfortunately wouldn't be a quick
drop-in, but a rather major reorganization of the regex handling.

What about starts_with(string, prefix)?

text_starts_with(arg1,arg2) in varlena.c does a full decompression
of arg1 when it could limit itself to the length of the smaller arg2:

Nice catch, I didn't find that one as it's not user visible, seems to
be only called in spgist (!!)
./backend/access/spgist/spgtextproc.c:
DatumGetBool(DirectFunctionCall2(text_starts_with

Thanks, I'll add that.

P

#21Daniel Verite
daniel@manitou-mail.org
In reply to: Paul Ramsey (#20)
#22Tom Lane
tgl@sss.pgh.pa.us
In reply to: Paul Ramsey (#17)
#23Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Robert Haas (#19)
#24Stephen Frost
sfrost@snowman.net
In reply to: Paul Ramsey (#20)
#25Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Stephen Frost (#24)
#26Darafei "Komяpa" Praliaskouski
In reply to: Paul Ramsey (#25)
#27Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Darafei "Komяpa" Praliaskouski (#26)
#28Regina Obe
lr@pcorp.us
In reply to: Paul Ramsey (#25)
#29Andrey Borodin
amborodin@acm.org
In reply to: Paul Ramsey (#25)
#30Michael Paquier
michael@paquier.xyz
In reply to: Regina Obe (#28)
#31Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Michael Paquier (#30)
#32Andrey Borodin
amborodin@acm.org
In reply to: Paul Ramsey (#31)
#33Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#30)
#34Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Andres Freund (#33)
#35Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Paul Ramsey (#34)
#36Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Andrey Borodin (#29)
#37Michael Paquier
michael@paquier.xyz
In reply to: Paul Ramsey (#35)
#38Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#37)
#39Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#38)
#40Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Michael Paquier (#39)
#41Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Tomas Vondra (#40)
#42Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Paul Ramsey (#41)
#43Andrey Borodin
amborodin@acm.org
In reply to: Paul Ramsey (#42)
#44Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Andrey Borodin (#43)
#45Stephen Frost
sfrost@snowman.net
In reply to: Andres Freund (#33)
#46Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stephen Frost (#45)
#47Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#46)
#48Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Robert Haas (#47)
#49Stephen Frost
sfrost@snowman.net
In reply to: Paul Ramsey (#48)
#50Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Stephen Frost (#49)
#51Stephen Frost
sfrost@snowman.net
In reply to: Paul Ramsey (#50)
#52Darafei "Komяpa" Praliaskouski
In reply to: Stephen Frost (#51)
#53Stephen Frost
sfrost@snowman.net
In reply to: Darafei "Komяpa" Praliaskouski (#52)
#54Andrey Borodin
amborodin@acm.org
In reply to: Andrey Borodin (#29)
#55Paul Ramsey
pramsey@cleverelephant.ca
In reply to: Andrey Borodin (#54)
#56Andrey Borodin
amborodin@acm.org
In reply to: Paul Ramsey (#55)
#57Andres Freund
andres@anarazel.de
In reply to: Paul Ramsey (#55)
#58Andrey Borodin
amborodin@acm.org
In reply to: Andres Freund (#57)
#59Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#57)
#60Andrey Borodin
amborodin@acm.org
In reply to: Tom Lane (#59)