Compression of full-page-writes

Started by Fujii Masao, over 12 years ago · 395 messages · pgsql-hackers
#1 Fujii Masao
masao.fujii@gmail.com

Hi,

The attached patch adds a new GUC parameter, 'compress_backup_block'.
When this parameter is enabled, the server compresses FPWs
(full-page writes) in WAL using pglz_compress() before inserting them
into the WAL buffers. The compressed FPWs are then decompressed
during recovery. It is a very simple patch.

The purpose of this patch is to reduce WAL size.
Under a heavy write load, the server needs to write a large amount of
WAL, and this is likely to be a bottleneck. What's worse, in
replication, a large amount of WAL harms not only WAL writing on the
master, but also WAL streaming and WAL writing on the standby. We
would also need to spend more money on storage to hold such a large
volume of data. I'd like to alleviate these harmful situations by
reducing WAL size.

My idea is very simple: just compress FPWs, because they make up a
large part of WAL. I used pglz_compress() as the compression method,
but you might think another method is better. We can add something
like an FPW-compression hook for that later. The patch adds a new GUC
parameter, but I'm thinking of merging it into the full_page_writes
parameter to avoid increasing the number of GUCs. That is, I'm
thinking of changing full_page_writes so that it accepts the new
value 'compress'.
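For illustration, the compress-on-insert / decompress-on-recovery round trip might look like this minimal sketch, using Python with zlib purely as a stand-in for pglz_compress() (like pglz, it keeps the compressed image only when the image actually got smaller):

```python
import zlib  # stand-in for pglz_compress()/pglz_decompress()

BLCKSZ = 8192  # PostgreSQL heap page size

# A fake full-page image: a little tuple data plus blank padding,
# similar to pgbench's "filler" column.
page = b"some tuple data".ljust(BLCKSZ, b" ")

# WAL-insert path: compress the FPW, and keep the compressed image
# only if it is actually smaller than the raw page.
compressed = zlib.compress(page)
stored = compressed if len(compressed) < BLCKSZ else page

# Recovery path: decompress the image before replaying the page.
restored = zlib.decompress(stored) if stored is compressed else stored
assert restored == page
```

The real patch would of course record in the WAL record header whether the block was compressed, rather than relying on a Python flag.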

I measured how much WAL this patch can reduce, by using pgbench.

* Server spec
CPU: 8core, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
Mem: 16GB
Disk: 500GB SSD Samsung 840

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

* Result
[tps]
1386.8 (compress_backup_block = off)
1627.7 (compress_backup_block = on)

[the amount of WAL generated during running pgbench]
4302 MB (compress_backup_block = off)
1521 MB (compress_backup_block = on)

At least in my test, the patch reduced the WAL size to one-third!

The patch is still WIP, but I'd like to hear opinions about this idea
before completing it, and then add the patch to the next CF if it
seems okay.

Regards,

--
Fujii Masao

Attachments:

compress_fpw_v1.patch (application/octet-stream), +72 -0
#2 Satoshi Nagayasu
snaga@uptime.jp
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

(2013/08/30 11:55), Fujii Masao wrote:

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

I believe that the amount of backup blocks in the WAL files is
affected by how often checkpoints occur, particularly under such an
update-intensive workload.

Under your configuration, checkpoints occur quite often. So you need
to increase checkpoint_timeout in order to determine whether the
patch is realistic.

Regards,


--
Satoshi Nagayasu <snaga@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#3 Satoshi Nagayasu
snaga@uptime.jp
In reply to: Satoshi Nagayasu (#2)
Re: Compression of full-page-writes

(2013/08/30 12:07), Satoshi Nagayasu wrote:

I believe that the amount of backup blocks in the WAL files is
affected by how often checkpoints occur, particularly under such an
update-intensive workload.

Under your configuration, checkpoints occur quite often. So you need
to increase checkpoint_timeout in order to determine whether the
patch is realistic.

In fact, the following chart shows that checkpoint_timeout=30min
also reduces the WAL size to one-third, compared with a 5min timeout,
in a pgbench experiment.

https://www.oss.ecl.ntt.co.jp/ossc/oss/img/pglesslog_img02.jpg

Regards,


--
Satoshi Nagayasu <snaga@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp


#4 Peter Geoghegan
pg@heroku.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On Thu, Aug 29, 2013 at 7:55 PM, Fujii Masao <masao.fujii@gmail.com> wrote:

[the amount of WAL generated during running pgbench]
4302 MB (compress_backup_block = off)
1521 MB (compress_backup_block = on)

Interesting.

I wonder, what is the impact on recovery time under the same
conditions? I suppose that the cost of the random I/O involved would
probably dominate just as with compress_backup_block = off. That said,
you've used an SSD here, so perhaps not.

--
Peter Geoghegan


#5 Amit Kapila
amit.kapila16@gmail.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 8:25 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

* Result
[tps]
1386.8 (compress_backup_block = off)
1627.7 (compress_backup_block = on)

[the amount of WAL generated during running pgbench]
4302 MB (compress_backup_block = off)
1521 MB (compress_backup_block = on)

This is really nice data.

If you want, you could also try one of the tests that Heikki posted
for another of my patches, here:
/messages/by-id/51366323.8070606@vmware.com

Also, if possible, test with fewer clients (1, 2, 4), and maybe with
more frequent checkpoints.

This is just to show the benefits of this idea with other kinds of
workloads.

I think we can do these tests later as well. I mentioned them because
some time back (probably 6 months ago), one of my colleagues tried
exactly the same idea of compressing FPWs (with LZ and a few other
methods), but it turned out that even though the WAL size was
reduced, performance went down. That is not the case in the data you
have shown, even though you used an SSD. He may have made some
mistake, as he was not very experienced, but I think it's still good
to check various workloads.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#6 KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

(2013/08/30 11:55), Fujii Masao wrote:

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

Did you execute a manual checkpoint before starting the benchmark?
Reading only your message, it appears that three checkpoints occurred
during the benchmark. But if you did not execute a manual checkpoint,
the result would be different.

You had better clarify this point for a more transparent evaluation.

Regards,
--
Mitsumasa KONDO
NTT Open Software Center


#7 NikhilS
nikkhils@gmail.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

Hi Fujii-san,

I must be missing something really trivial, but why not try to compress all
types of WAL blocks and not just FPW?

Regards,
Nikhils



#8 Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 11:55 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

My idea is very simple, just compress FPW because FPW is
a big part of WAL. I used pglz_compress() as a compression method,
but you might think that other method is better. We can add
something like FPW-compression-hook for that later. The patch
adds new GUC parameter, but I'm thinking to merge it to full_page_writes
parameter to avoid increasing the number of GUC. That is,
I'm thinking to change full_page_writes so that it can accept new value
'compress'.

Instead of a generic 'compress', what about using the name of the
compression method as the parameter value? Just to keep the door
open to new types of compression methods.
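For example (hypothetical parameter values; the patch as posted only implements 'compress' using pglz):

```
# postgresql.conf -- hypothetical sketch
full_page_writes = 'pglz'    # compress FPWs with pglz
#full_page_writes = 'lz4'    # a possible future compression method
```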

* Result
[tps]
1386.8 (compress_backup_block = off)
1627.7 (compress_backup_block = on)

[the amount of WAL generated during running pgbench]
4302 MB (compress_backup_block = off)
1521 MB (compress_backup_block = on)

At least in my test, the patch could reduce the WAL size to one-third!

Nice numbers! Testing this patch with benchmarks other than pgbench
would be interesting as well.
--
Michael


#9 Fujii Masao
masao.fujii@gmail.com
In reply to: Peter Geoghegan (#4)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 12:43 PM, Peter Geoghegan <pg@heroku.com> wrote:

On Thu, Aug 29, 2013 at 7:55 PM, Fujii Masao <masao.fujii@gmail.com> wrote:

[the amount of WAL generated during running pgbench]
4302 MB (compress_backup_block = off)
1521 MB (compress_backup_block = on)

Interesting.

I wonder, what is the impact on recovery time under the same
conditions?

Will test! I imagine that recovery would take a bit longer with
compress_backup_block=on, because the compressed FPWs need to be
decompressed.

I suppose that the cost of the random I/O involved would
probably dominate just as with compress_backup_block = off. That said,
you've used an SSD here, so perhaps not.

Oh, maybe my description was confusing. full_page_writes was enabled
while running the benchmark even when compress_backup_block = off.
I've not merged the two parameters yet. So even with
compress_backup_block = off, random I/O would not increase during recovery.

Regards,

--
Fujii Masao


#10 Fujii Masao
masao.fujii@gmail.com
In reply to: Amit Kapila (#5)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 1:43 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:


This is really nice data.

If you want, you could also try one of the tests that Heikki posted
for another of my patches, here:
/messages/by-id/51366323.8070606@vmware.com

Also, if possible, test with fewer clients (1, 2, 4), and maybe with
more frequent checkpoints.

This is just to show the benefits of this idea with other kinds of
workloads.

Yep, I will do more tests.

I think we can do these tests later as well. I mentioned them because
some time back (probably 6 months ago), one of my colleagues tried
exactly the same idea of compressing FPWs (with LZ and a few other
methods), but it turned out that even though the WAL size was
reduced, performance went down. That is not the case in the data you
have shown, even though you used an SSD. He may have made some
mistake, as he was not very experienced, but I think it's still good
to check various workloads.

I'd appreciate it if you could test the patch with an HDD. I currently have no machine with an HDD.

Regards,

--
Fujii Masao


#11 Fujii Masao
masao.fujii@gmail.com
In reply to: KONDO Mitsumasa (#6)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 2:32 PM, KONDO Mitsumasa
<kondo.mitsumasa@lab.ntt.co.jp> wrote:

(2013/08/30 11:55), Fujii Masao wrote:

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

Did you execute a manual checkpoint before starting the benchmark?

Yes.

Reading only your message, it appears that three checkpoints occurred
during the benchmark. But if you did not execute a manual checkpoint,
the result would be different.

You had better clarify this point for a more transparent evaluation.

What I executed was:

-------------------------------------
CHECKPOINT
SELECT pg_current_xlog_location()
pgbench -c 32 -j 4 -T 900 -M prepared -r -P 10
SELECT pg_current_xlog_location()
SELECT pg_xlog_location_diff() -- calculate the diff of the above locations
-------------------------------------

I repeated this several times to eliminate the noise.
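For reference, pg_xlog_location_diff() just subtracts the two locations as byte positions; an LSN string 'X/Y' denotes X * 2^32 + Y bytes, so the WAL volume can also be computed by hand (the LSN values below are made up for illustration):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert an LSN string like '16/B374D848' to an absolute byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# Hypothetical locations captured before and after a pgbench run.
start = lsn_to_bytes("1/00000000")
end = lsn_to_bytes("2/10000000")

wal_mb = (end - start) // (1024 * 1024)
print(wal_mb, "MB")  # → 4352 MB
```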

Regards,

--
Fujii Masao


#12 Peter Geoghegan
pg@heroku.com
In reply to: Fujii Masao (#9)
Re: Compression of full-page-writes

On Thu, Aug 29, 2013 at 10:55 PM, Fujii Masao <masao.fujii@gmail.com> wrote:

I suppose that the cost of the random I/O involved would
probably dominate just as with compress_backup_block = off. That said,
you've used an SSD here, so perhaps not.

Oh, maybe my description was confusing. full_page_writes was enabled
while running the benchmark even if compress_backup_block = off.
I've not merged those two parameters yet. So even in
compress_backup_block = off, random I/O would not be increased in recovery.

I understood it that way. I just meant that the random I/O might be
so expensive that the additional cost of decompressing the FPIs looks
insignificant in comparison. If that were the case, the increase in
recovery time would be modest.

--
Peter Geoghegan


#13 Fujii Masao
masao.fujii@gmail.com
In reply to: NikhilS (#7)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 2:37 PM, Nikhil Sontakke <nikkhils@gmail.com> wrote:

Hi Fujii-san,

I must be missing something really trivial, but why not try to compress all
types of WAL blocks and not just FPW?

The size of non-FPW WAL records is small compared to that of FPWs.
I thought that compressing such small WAL records would not have a
big effect on the reduction of WAL size. Rather, compressing every
WAL record might cause a large performance overhead.

Also, focusing on FPWs keeps the patch very simple. We can add
compression of other WAL records later if we want.
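The trade-off can be sketched with zlib standing in for the compressor: a padding-heavy full-page image shrinks by orders of magnitude, while a small record saves little because of the compressor's fixed overhead (the record contents below are made up for illustration):

```python
import zlib

full_page = b"tuple data".ljust(8192, b"\x00")   # FPW-sized, padding-heavy image
small_rec = b"a small heap-insert WAL record".ljust(64, b"\x00")

for name, data in [("full page", full_page), ("small record", small_rec)]:
    c = zlib.compress(data)
    print(f"{name}: {len(data)} -> {len(c)} bytes "
          f"({len(c) / len(data):.0%} of original)")
```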

Regards,

--
Fujii Masao


#14 Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On 30.08.2013 05:55, Fujii Masao wrote:

* Result
[tps]
1386.8 (compress_backup_block = off)
1627.7 (compress_backup_block = on)

It would be good to check how much of this effect comes from reducing
the amount of data that needs to be CRC'd, because there has been some
talk of replacing the current CRC-32 algorithm with something faster.
See
/messages/by-id/20130829223004.GD4283@awork2.anarazel.de.
It might even be beneficial to use one routine for full-page-writes,
which are generally much larger than other WAL records, and another
routine for smaller records. As long as they both produce the same CRC,
of course.

Speeding up the CRC calculation obviously won't help with the WAL
volume per se, i.e. you still generate the same amount of WAL that
needs to be shipped in replication. But then again, if all you want
is to reduce the volume, you could just compress the whole WAL stream.

- Heikki


#15 Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On Thu, Aug 29, 2013 at 10:55 PM, Fujii Masao <masao.fujii@gmail.com> wrote:

Attached patch adds new GUC parameter 'compress_backup_block'.

I think this is a great idea.

(This is not to disagree with any of the suggestions made on this
thread for further investigation, all of which I think I basically
agree with. But I just wanted to voice general support for the
general idea, regardless of what specifically we end up with.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#16 Fujii Masao
masao.fujii@gmail.com
In reply to: Fujii Masao (#1)
Re: Compression of full-page-writes

On Fri, Aug 30, 2013 at 11:55 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

My idea is very simple: just compress FPWs, because they make up a
large part of WAL. I used pglz_compress() as the compression method,
but you might think another method is better. We can add something
like an FPW-compression hook for that later. The patch adds a new GUC
parameter, but I'm thinking of merging it into the full_page_writes
parameter to avoid increasing the number of GUCs. That is, I'm
thinking of changing full_page_writes so that it accepts the new
value 'compress'.

Done. Attached is the updated version of the patch.

In this patch, full_page_writes accepts three values: on, compress, and off.
When it's set to compress, the full page image is compressed before it's
inserted into the WAL buffers.

I measured how much this patch affects the performance and the WAL
volume again, and I also measured how much this patch affects the
recovery time.

* Server spec
CPU: 8core, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
Mem: 16GB
Disk: 500GB SSD Samsung 840

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

* Result
[tps]
1344.2 (full_page_writes = on)
1605.9 (compress)
1810.1 (off)

[the amount of WAL generated during running pgbench]
4422 MB (on)
1517 MB (compress)
885 MB (off)

[time required to replay WAL generated during running pgbench]
61s (on) .... 1209911 transactions were replayed,
recovery speed: 19834.6 transactions/sec
39s (compress) .... 1445446 transactions were replayed,
recovery speed: 37062.7 transactions/sec
37s (off) .... 1629235 transactions were replayed,
recovery speed: 44033.3 transactions/sec

When full_page_writes is disabled, recovery is normally very slow
because of random I/O. But since I was using an SSD in my box,
recovery with full_page_writes=off turned out to be the fastest.

Regards,

--
Fujii Masao

Attachments:

compress_fpw_v2.patch (application/octet-stream), +212 -97
#17 Andres Freund
andres@anarazel.de
In reply to: Fujii Masao (#16)
Re: Compression of full-page-writes

On 2013-09-11 19:39:14 +0900, Fujii Masao wrote:

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

* Result
[tps]
1344.2 (full_page_writes = on)
1605.9 (compress)
1810.1 (off)

[the amount of WAL generated during running pgbench]
4422 MB (on)
1517 MB (compress)
885 MB (off)

[time required to replay WAL generated during running pgbench]
61s (on) .... 1209911 transactions were replayed,
recovery speed: 19834.6 transactions/sec
39s (compress) .... 1445446 transactions were replayed,
recovery speed: 37062.7 transactions/sec
37s (off) .... 1629235 transactions were replayed,
recovery speed: 44033.3 transactions/sec

ISTM that for these benchmarks you should use an absolute number of
transactions, not a fixed elapsed time. Otherwise the comparison
isn't really meaningful.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#18 Fujii Masao
masao.fujii@gmail.com
In reply to: Fujii Masao (#16)
Re: Compression of full-page-writes

On Wed, Sep 11, 2013 at 7:39 PM, Fujii Masao <masao.fujii@gmail.com> wrote:

On Fri, Aug 30, 2013 at 11:55 AM, Fujii Masao <masao.fujii@gmail.com> wrote:

Hi,

The attached patch adds a new GUC parameter, 'compress_backup_block'.
When this parameter is enabled, the server compresses FPWs
(full-page writes) in WAL using pglz_compress() before inserting them
into the WAL buffers. The compressed FPWs are then decompressed
during recovery. This is a very simple patch.

The purpose of this patch is to reduce WAL size.
Under a heavy write load, the server needs to write a large amount of
WAL, and this is likely to be a bottleneck. What's worse, in
replication, a large amount of WAL harms not only WAL writing
on the master, but also WAL streaming and WAL writing on the
standby. We would also need to spend more money on storage to
hold such a large amount of data.
I'd like to alleviate these problems by reducing WAL size.

My idea is very simple: just compress FPWs, because they are
a big part of WAL. I used pglz_compress() as the compression method,
but you might think another method is better. We can add
something like an FPW-compression hook for that later. The patch
adds a new GUC parameter, but I'm thinking of merging it into the
full_page_writes parameter to avoid increasing the number of GUCs.
That is, I'm thinking of changing full_page_writes so that it can
accept the new value 'compress'.

Done. Attached is the updated version of the patch.

In this patch, full_page_writes accepts three values: on, compress, and off.
When it's set to compress, the full page image is compressed before it's
inserted into the WAL buffers.
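
With the updated patch, enabling compression is then just a configuration change. A sketch of the relevant postgresql.conf fragment, assuming the patch is applied:

```
# postgresql.conf (with this patch applied)
full_page_writes = 'compress'   # compress full-page images before WAL insertion
```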

I measured how much this patch affects the performance and the WAL
volume again, and I also measured how much this patch affects the
recovery time.

* Server spec
CPU: 8core, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
Mem: 16GB
Disk: 500GB SSD Samsung 840

* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100

checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during the benchmark was triggered by checkpoint_timeout)

* Result
[tps]
1344.2 (full_page_writes = on)
1605.9 (compress)
1810.1 (off)

[the amount of WAL generated during running pgbench]
4422 MB (on)
1517 MB (compress)
885 MB (off)
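
From the volumes above, the compressed run writes roughly one-third of the uncompressed WAL. A quick sketch of the arithmetic, using the figures reported above (the assumption that WAL beyond the 'off' baseline is mostly FPWs is an approximation):

```python
# WAL volume (MB) for each full_page_writes setting, from the results above.
wal_mb = {"on": 4422, "compress": 1517, "off": 885}

# Compression reduces total WAL to about a third of the original.
ratio = wal_mb["compress"] / wal_mb["on"]
print(f"compress/on = {ratio:.2f}")

# Approximate the FPW payload as the WAL written beyond the 'off' baseline.
fpw_on = wal_mb["on"] - wal_mb["off"]              # ~3537 MB of full-page images
fpw_compress = wal_mb["compress"] - wal_mb["off"]  # ~632 MB after compression
print(f"FPW bytes compressed to {fpw_compress / fpw_on:.0%} of original")
```
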

On second thought, the patch could compress WAL this much because I
used pgbench.
Most of the data in pgbench is the pgbench_accounts table's "filler" column,
i.e., blank-padded empty strings, so the compression ratio of the WAL was
very high.
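
The effect is easy to reproduce outside the server: compressing a page image that is mostly blank padding yields a very high ratio. A small sketch, using zlib as a stand-in for pglz_compress() (which isn't callable outside the backend), with a made-up page layout:

```python
import zlib

BLCKSZ = 8192  # default PostgreSQL block size

# Fake a pgbench_accounts-style page image: a little header/tuple data,
# then mostly blank-padded "filler" bytes.
page = b"\x01" * 256 + b" " * (BLCKSZ - 256)

compressed = zlib.compress(page)
print(f"{BLCKSZ} -> {len(compressed)} bytes "
      f"({len(compressed) / BLCKSZ:.1%} of original)")
```

Real pages compress less dramatically, but the direction is the same: runs of identical bytes are what LZ-family compressors handle best.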

I will do the same measurement by using another benchmark.

Regards,

--
Fujii Masao


#19KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#18)
Re: Compression of full-page-writes

Hi Fujii-san,

(2013/09/30 12:49), Fujii Masao wrote:

On second thought, the patch could compress WAL very much because I used pgbench.
I will do the same measurement by using another benchmark.

If you like, I can test this patch with the DBT-2 benchmark at the end of
this week. I will use the following test server.

* Test server
Server: HP Proliant DL360 G7
CPU: Xeon E5640 2.66GHz (1P/4C)
Memory: 18GB(PC3-10600R-9)
Disk: 146GB(15k)*4 RAID1+0
RAID controller: P410i/256MB

This is the PG-REX test server, as you know.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


#20Fujii Masao
masao.fujii@gmail.com
In reply to: KONDO Mitsumasa (#19)
Re: Compression of full-page-writes

On Mon, Sep 30, 2013 at 1:27 PM, KONDO Mitsumasa
<kondo.mitsumasa@lab.ntt.co.jp> wrote:

Hi Fujii-san,

(2013/09/30 12:49), Fujii Masao wrote:

On second thought, the patch could compress WAL this much because I used
pgbench.

I will do the same measurement by using another benchmark.

If you like, I can test this patch with the DBT-2 benchmark at the end of
this week. I will use the following test server.

* Test server
Server: HP Proliant DL360 G7
CPU: Xeon E5640 2.66GHz (1P/4C)
Memory: 18GB(PC3-10600R-9)
Disk: 146GB(15k)*4 RAID1+0
RAID controller: P410i/256MB

Yes, please! That would be really helpful!

Regards,

--
Fujii Masao


#21Amit Kapila
amit.kapila16@gmail.com
In reply to: Fujii Masao (#20)
#22KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Amit Kapila (#21)
#23Fujii Masao
masao.fujii@gmail.com
In reply to: Amit Kapila (#21)
#24Amit Kapila
amit.kapila16@gmail.com
In reply to: Fujii Masao (#23)
#25Haribabu kommi
haribabu.kommi@huawei.com
In reply to: Amit Kapila (#24)
#26Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#17)
#27KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Haribabu kommi (#25)
#28Haribabu kommi
haribabu.kommi@huawei.com
In reply to: KONDO Mitsumasa (#27)
#29KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#20)
#30KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Haribabu kommi (#28)
#31Haribabu kommi
haribabu.kommi@huawei.com
In reply to: KONDO Mitsumasa (#30)
#32Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Fujii Masao (#16)
#33Fujii Masao
masao.fujii@gmail.com
In reply to: KONDO Mitsumasa (#29)
#34Fujii Masao
masao.fujii@gmail.com
In reply to: Haribabu kommi (#31)
#35Fujii Masao
masao.fujii@gmail.com
In reply to: Dimitri Fontaine (#32)
#36Andres Freund
andres@anarazel.de
In reply to: Fujii Masao (#35)
#37Fujii Masao
masao.fujii@gmail.com
In reply to: Fujii Masao (#35)
#38Fujii Masao
masao.fujii@gmail.com
In reply to: Andres Freund (#36)
#39Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#36)
#40Haribabu kommi
haribabu.kommi@huawei.com
In reply to: Fujii Masao (#34)
#41Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#39)
#42Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#41)
#43Jesper Krogh
jesper@krogh.cc
In reply to: Andres Freund (#41)
#44KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Amit Kapila (#42)
#45Amit Kapila
amit.kapila16@gmail.com
In reply to: KONDO Mitsumasa (#44)
#46KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Amit Kapila (#45)
#47Kenneth Marshall
In reply to: KONDO Mitsumasa (#46)
#48KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Kenneth Marshall (#47)
In reply to: KONDO Mitsumasa (#48)
#50KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#33)
#51Amit Kapila
amit.kapila16@gmail.com
In reply to: KONDO Mitsumasa (#46)
#52KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Amit Kapila (#51)
#53Amit Kapila
amit.kapila16@gmail.com
In reply to: KONDO Mitsumasa (#52)
#54Fujii Masao
masao.fujii@gmail.com
In reply to: Amit Kapila (#53)
#55Amit Kapila
amit.kapila16@gmail.com
In reply to: Fujii Masao (#54)
#56Andres Freund
andres@anarazel.de
In reply to: Fujii Masao (#54)
#57KONDO Mitsumasa
kondo.mitsumasa@lab.ntt.co.jp
In reply to: Fujii Masao (#54)
#58Amit Kapila
amit.kapila16@gmail.com
In reply to: KONDO Mitsumasa (#57)
#59Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#54)
#60Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#59)
#61Kenneth Marshall
In reply to: Robert Haas (#59)
#62Robert Haas
robertmhaas@gmail.com
In reply to: Kenneth Marshall (#61)
In reply to: Robert Haas (#62)
#64Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#59)
#65Bruce Momjian
bruce@momjian.us
In reply to: Fujii Masao (#37)
#66Fujii Masao
masao.fujii@gmail.com
In reply to: Bruce Momjian (#65)
#67Sameer Thakur
samthakur74@gmail.com
In reply to: Fujii Masao (#16)
#68Tom Lane
tgl@sss.pgh.pa.us
In reply to: Sameer Thakur (#67)
#69Fujii Masao
masao.fujii@gmail.com
In reply to: Sameer Thakur (#67)
#70Sameer Thakur
samthakur74@gmail.com
In reply to: Fujii Masao (#69)
#71Simon Riggs
simon@2ndQuadrant.com
In reply to: Fujii Masao (#1)
#72Fujii Masao
masao.fujii@gmail.com
In reply to: Simon Riggs (#71)
#73Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Fujii Masao (#72)
#74Rahila Syed
rahilasyed.90@gmail.com
In reply to: Fujii Masao (#1)
#75Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#74)
#76Simon Riggs
simon@2ndQuadrant.com
In reply to: Fujii Masao (#75)
#77Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#76)
#78Simon Riggs
simon@2ndQuadrant.com
In reply to: Bruce Momjian (#77)
#79Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#75)
#80Bruce Momjian
bruce@momjian.us
In reply to: Simon Riggs (#78)
#81Fujii Masao
masao.fujii@gmail.com
In reply to: Simon Riggs (#78)
#82Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#74)
#83Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#82)
#84Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#83)
#85Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Fujii Masao (#84)
#86Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#74)
#87Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Rahila Syed (#86)
#88Claudio Freire
klaussfreire@gmail.com
In reply to: Abhijit Menon-Sen (#87)
#89Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Claudio Freire (#88)
#90Rahila Syed
rahilasyed90@gmail.com
In reply to: Abhijit Menon-Sen (#87)
#91Andres Freund
andres@anarazel.de
In reply to: Rahila Syed (#90)
#92Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Rahila Syed (#90)
#93Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Abhijit Menon-Sen (#92)
#94Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Abhijit Menon-Sen (#92)
#95Rahila Syed
rahilasyed90@gmail.com
In reply to: Abhijit Menon-Sen (#87)
#96Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#95)
#97Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Fujii Masao (#96)
#98Rahila Syed
rahilasyed90@gmail.com
In reply to: Abhijit Menon-Sen (#97)
#99Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Rahila Syed (#98)
#100Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Abhijit Menon-Sen (#99)
#101Rahila Syed
rahilasyed90@gmail.com
In reply to: Abhijit Menon-Sen (#100)
#102Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#101)
#103Andres Freund
andres@anarazel.de
In reply to: Rahila Syed (#98)
#104Rahila Syed
rahilasyed90@gmail.com
In reply to: Andres Freund (#103)
#105Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Rahila Syed (#104)
#106Fujii Masao
masao.fujii@gmail.com
In reply to: Pavan Deolasee (#105)
#107Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Fujii Masao (#106)
#108Rahila Syed
rahilasyed90@gmail.com
In reply to: Andres Freund (#103)
#109Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#108)
#110Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#1)
#111Robert Haas
robertmhaas@gmail.com
In reply to: Rahila Syed (#110)
#112Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#111)
#113Robert Haas
robertmhaas@gmail.com
In reply to: Rahila Syed (#95)
#114Fujii Masao
masao.fujii@gmail.com
In reply to: Andres Freund (#112)
#115Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#114)
#116Rahila Syed
rahilasyed90@gmail.com
In reply to: Robert Haas (#113)
#117Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#115)
#118Robert Haas
robertmhaas@gmail.com
In reply to: Fujii Masao (#117)
#119Arthur Silva
arthurprs@gmail.com
In reply to: Fujii Masao (#117)
#120Fujii Masao
masao.fujii@gmail.com
In reply to: Robert Haas (#118)
#121Fujii Masao
masao.fujii@gmail.com
In reply to: Arthur Silva (#119)
#122Rahila Syed
rahilasyed90@gmail.com
In reply to: Arthur Silva (#119)
#123Arthur Silva
arthurprs@gmail.com
In reply to: Rahila Syed (#122)
#124Kenneth Marshall
In reply to: Arthur Silva (#123)
#125Andres Freund
andres@anarazel.de
In reply to: Kenneth Marshall (#124)
#126Rahila Syed
rahilasyed.90@gmail.com
In reply to: Rahila Syed (#122)
#127Arthur Silva
arthurprs@gmail.com
In reply to: Rahila Syed (#126)
#128Kenneth Marshall
In reply to: Arthur Silva (#127)
#129Mitsumasa KONDO
kondo.mitsumasa@gmail.com
In reply to: Kenneth Marshall (#128)
#130Robert Haas
robertmhaas@gmail.com
In reply to: Rahila Syed (#126)
#131Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#130)
#132Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#130)
#133Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#131)
#134Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#133)
#135Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#134)
In reply to: Andres Freund (#131)
In reply to: Andres Freund (#134)
#138Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Fujii Masao (#120)
#139Abhijit Menon-Sen
ams@2ndQuadrant.com
In reply to: Heikki Linnakangas (#138)
#140Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Abhijit Menon-Sen (#139)
#141Ants Aasma
ants.aasma@cybertec.at
In reply to: Heikki Linnakangas (#138)
#142Andres Freund
andres@anarazel.de
In reply to: Heikki Linnakangas (#140)
#143Andres Freund
andres@anarazel.de
In reply to: Ants Aasma (#141)
#144Andres Freund
andres@anarazel.de
In reply to: Heikki Linnakangas (#138)
In reply to: Ants Aasma (#141)
#146Arthur Silva
arthurprs@gmail.com
In reply to: Ants Aasma (#141)
#147Arthur Silva
arthurprs@gmail.com
In reply to: Andres Freund (#142)
#148Ants Aasma
ants.aasma@cybertec.at
In reply to: Arthur Silva (#146)
#149Andres Freund
andres@anarazel.de
In reply to: Ants Aasma (#148)
#150Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#149)
In reply to: Tom Lane (#150)
#152Arthur Silva
arthurprs@gmail.com
In reply to: Tom Lane (#150)
#153Kenneth Marshall
In reply to: Arthur Silva (#152)
#154Arthur Silva
arthurprs@gmail.com
In reply to: Kenneth Marshall (#153)
#155Claudio Freire
klaussfreire@gmail.com
In reply to: Kenneth Marshall (#153)
#156Andres Freund
andres@anarazel.de
In reply to: Kenneth Marshall (#153)
In reply to: Andres Freund (#156)
#158Arthur Silva
arthurprs@gmail.com
In reply to: Andres Freund (#156)
#159Craig Ringer
craig@2ndquadrant.com
In reply to: Kenneth Marshall (#153)
#160Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Arthur Silva (#158)
#161Amit Kapila
amit.kapila16@gmail.com
In reply to: Heikki Linnakangas (#140)
#162Andres Freund
andres@anarazel.de
In reply to: Amit Kapila (#161)
#163Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Andres Freund (#162)
#164Andres Freund
andres@anarazel.de
In reply to: Heikki Linnakangas (#163)
#165Amit Kapila
amit.kapila16@gmail.com
In reply to: Andres Freund (#164)
#166Rahila Syed
rahilasyed.90@gmail.com
In reply to: Robert Haas (#135)
#167Rahila Syed
rahilasyed.90@gmail.com
In reply to: Rahila Syed (#166)
#168Tom Lane
tgl@sss.pgh.pa.us
In reply to: Rahila Syed (#166)
#169Masahiko Sawada
sawada.mshk@gmail.com
In reply to: Rahila Syed (#167)
#170Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#168)
#171Rahila Syed
rahilasyed90@gmail.com
In reply to: Alvaro Herrera (#170)
#172Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Rahila Syed (#171)
#173Florian Weimer
fw@deneb.enyo.de
In reply to: Ants Aasma (#148)
#174Ants Aasma
ants.aasma@cybertec.at
In reply to: Florian Weimer (#173)
#175Andres Freund
andres@anarazel.de
In reply to: Syed, Rahila (#172)
#176Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#175)
#177Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#176)
#178Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Robert Haas (#176)
#179Andres Freund
andres@anarazel.de
In reply to: Heikki Linnakangas (#178)
#180Robert Haas
robertmhaas@gmail.com
In reply to: Heikki Linnakangas (#163)
#181Rahila Syed
rahilasyed.90@gmail.com
In reply to: Andres Freund (#175)
#182Rahila Syed
rahilasyed90@gmail.com
In reply to: Andres Freund (#175)
#183Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#182)
#184Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Fujii Masao (#183)
#185Fujii Masao
masao.fujii@gmail.com
In reply to: Syed, Rahila (#184)
#186Rahila Syed
rahilasyed.90@gmail.com
In reply to: Fujii Masao (#185)
#187Rahila Syed
rahilasyed90@gmail.com
In reply to: Rahila Syed (#186)
#188Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#187)
#189Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#188)
#190Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#189)
#191Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#190)
#192Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#191)
#193Amit Langote
Langote_Amit_f8@lab.ntt.co.jp
In reply to: Michael Paquier (#192)
#194Rahila Syed
rahilasyed.90@gmail.com
In reply to: Michael Paquier (#192)
#195Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#192)
#196Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#195)
#197Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#195)
#198Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#197)
#199Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#198)
#200Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#199)
#201Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Michael Paquier (#200)
#202Michael Paquier
michael@paquier.xyz
In reply to: Alvaro Herrera (#201)
#203Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#202)
#204Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#203)
#205Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#204)
#206Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#205)
#207Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#206)
#208Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#207)
#209Rahila Syed
rahilasyed90@gmail.com
In reply to: Michael Paquier (#205)
#210Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#208)
#211Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#210)
#212Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#209)
#213Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#205)
#214Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#213)
#215Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#214)
#216Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#215)
#217Rahila Syed
rahilasyed.90@gmail.com
In reply to: Robert Haas (#213)
#218Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#217)
#219Robert Haas
robertmhaas@gmail.com
In reply to: Rahila Syed (#217)
#220Rahila Syed
rahilasyed.90@gmail.com
In reply to: Robert Haas (#219)
#221Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#218)
#222Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#219)
#223Rahila Syed
rahilasyed.90@gmail.com
In reply to: Michael Paquier (#222)
#224Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#223)
#225Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#224)
#226Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#225)
#227Robert Haas
robertmhaas@gmail.com
In reply to: Rahila Syed (#220)
#228Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#226)
#229Simon Riggs
simon@2ndQuadrant.com
In reply to: Michael Paquier (#228)
#230Michael Paquier
michael@paquier.xyz
In reply to: Simon Riggs (#229)
#231Rahila Syed
rahilasyed90@gmail.com
In reply to: Michael Paquier (#228)
#232Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#231)
#233Simon Riggs
simon@2ndQuadrant.com
In reply to: Michael Paquier (#230)
#234Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#229)
#235Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#234)
#236Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#235)
#237Heikki Linnakangas
heikki.linnakangas@enterprisedb.com
In reply to: Andres Freund (#235)
#238Michael Paquier
michael@paquier.xyz
In reply to: Heikki Linnakangas (#237)
#239Simon Riggs
simon@2ndQuadrant.com
In reply to: Robert Haas (#234)
#240Simon Riggs
simon@2ndQuadrant.com
In reply to: Andres Freund (#235)
#241Amit Kapila
amit.kapila16@gmail.com
In reply to: Simon Riggs (#233)
#242Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#241)
#243Rahila Syed
rahilasyed90@gmail.com
In reply to: Robert Haas (#227)
#244Bruce Momjian
bruce@momjian.us
In reply to: Rahila Syed (#243)
#245Arthur Silva
arthurprs@gmail.com
In reply to: Rahila Syed (#243)
#246Rahila Syed
rahilasyed90@gmail.com
In reply to: Bruce Momjian (#244)
#247Bruce Momjian
bruce@momjian.us
In reply to: Rahila Syed (#246)
#248Michael Paquier
michael@paquier.xyz
In reply to: Bruce Momjian (#247)
#249Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#234)
#250Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#249)
#251Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#247)
#252Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#251)
#253Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#250)
#254Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#251)
#255Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#254)
#256Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#255)
#257Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#256)
#258Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#253)
#259Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#258)
#260Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#259)
#261Rahila Syed
rahilasyed90@gmail.com
In reply to: Bruce Momjian (#254)
#262Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#257)
#263Michael Paquier
michael@paquier.xyz
In reply to: Bruce Momjian (#244)
#264Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#262)
#265Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#263)
#266Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#265)
#267Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#266)
#268Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#267)
#269Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#268)
#270Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#269)
#271Simon Riggs
simon@2ndQuadrant.com
In reply to: Bruce Momjian (#270)
#272Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#271)
#273Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#266)
#274Claudio Freire
klaussfreire@gmail.com
In reply to: Michael Paquier (#273)
#275Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#241)
#276Simon Riggs
simon@2ndQuadrant.com
In reply to: Robert Haas (#272)
#277Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#263)
#278Simon Riggs
simon@2ndQuadrant.com
In reply to: Michael Paquier (#277)
#279Michael Paquier
michael@paquier.xyz
In reply to: Simon Riggs (#278)
#280Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#279)
#281Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#280)
#282Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#281)
#283Robert Haas
robertmhaas@gmail.com
In reply to: Michael Paquier (#277)
#284Merlin Moncure
mmoncure@gmail.com
In reply to: Andres Freund (#257)
#285Michael Paquier
michael@paquier.xyz
In reply to: Robert Haas (#283)
#286Michael Paquier
michael@paquier.xyz
In reply to: Merlin Moncure (#284)
#287Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#285)
#288Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Michael Paquier (#287)
#289Michael Paquier
michael@paquier.xyz
In reply to: Alvaro Herrera (#288)
#290Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#289)
#291Merlin Moncure
mmoncure@gmail.com
In reply to: Michael Paquier (#286)
#292Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#290)
#293Michael Paquier
michael@paquier.xyz
In reply to: Merlin Moncure (#291)
#294Rahila Syed
rahilasyed90@gmail.com
In reply to: Michael Paquier (#292)
#295Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#292)
#296Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#295)
#297Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#294)
#298Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#297)
#299Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#298)
#300Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#299)
#301Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#299)
#302Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#301)
#303Rahila Syed
rahilasyed90@gmail.com
In reply to: Michael Paquier (#302)
#304Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#302)
#305Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#304)
#306Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#305)
#307Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#306)
#308Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#307)
#309Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#308)
#310Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#309)
#311Jeff Davis
pgsql@j-davis.com
In reply to: Heikki Linnakangas (#14)
#312Michael Paquier
michael@paquier.xyz
In reply to: Jeff Davis (#311)
#313Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#312)
#314Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#313)
#315Amit Kapila
amit.kapila16@gmail.com
In reply to: Bruce Momjian (#314)
#316Michael Paquier
michael@paquier.xyz
In reply to: Amit Kapila (#315)
#317Bruce Momjian
bruce@momjian.us
In reply to: Amit Kapila (#315)
#318Amit Kapila
amit.kapila16@gmail.com
In reply to: Michael Paquier (#316)
#319Amit Kapila
amit.kapila16@gmail.com
In reply to: Bruce Momjian (#317)
#320Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#314)
#321Kenneth Marshall
In reply to: Andres Freund (#320)
#322Bruce Momjian
bruce@momjian.us
In reply to: Kenneth Marshall (#321)
#323Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#322)
#324Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#323)
#325Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#324)
#326Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#325)
#327Claudio Freire
klaussfreire@gmail.com
In reply to: Andres Freund (#325)
#328Bruce Momjian
bruce@momjian.us
In reply to: Claudio Freire (#327)
#329Stephen Frost
sfrost@snowman.net
In reply to: Bruce Momjian (#328)
#330Michael Paquier
michael@paquier.xyz
In reply to: Bruce Momjian (#322)
#331Fujii Masao
masao.fujii@gmail.com
In reply to: Bruce Momjian (#322)
#332Fujii Masao
masao.fujii@gmail.com
In reply to: Bruce Momjian (#328)
#333Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#310)
#334Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#333)
#335Rahila Syed
rahilasyed.90@gmail.com
In reply to: Michael Paquier (#334)
#336Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#335)
#337Rahila Syed
rahilasyed.90@gmail.com
In reply to: Fujii Masao (#332)
#338Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#337)
#339Rahila Syed
rahilasyed.90@gmail.com
In reply to: Michael Paquier (#338)
#340Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#339)
#341Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#322)
#342Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#340)
#343Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#334)
#344Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#336)
#345Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#343)
#346Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#344)
#347Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#346)
#348Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#347)
#349Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#348)
#350Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#346)
#351Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#350)
#352Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#345)
#353Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#348)
#354Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#353)
#355Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Syed, Rahila (#353)
#356Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#354)
#357Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#356)
#358Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#357)
#359Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#358)
#360Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#359)
#361Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#360)
#362Andres Freund
andres@anarazel.de
In reply to: Syed, Rahila (#360)
#363Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#361)
#364Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Andres Freund (#362)
#365Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#362)
#366Rahila Syed
rahilasyed90@gmail.com
In reply to: Syed, Rahila (#364)
#367Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#366)
#368Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#367)
#369Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Fujii Masao (#367)
#370Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#368)
#371Fujii Masao
masao.fujii@gmail.com
In reply to: Syed, Rahila (#369)
#372Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#371)
#373Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#372)
#374Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#373)
#375Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#374)
#376Rahila Syed
rahilasyed90@gmail.com
In reply to: Fujii Masao (#375)
#377Michael Paquier
michael@paquier.xyz
In reply to: Rahila Syed (#376)
#378Andres Freund
andres@anarazel.de
In reply to: Michael Paquier (#377)
#379Michael Paquier
michael@paquier.xyz
In reply to: Andres Freund (#378)
#380Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Michael Paquier (#374)
#381Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#379)
#382Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#380)
#383Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Fujii Masao (#381)
#384Syed, Rahila
Rahila.Syed@nttdata.com
In reply to: Syed, Rahila (#383)
#385Michael Paquier
michael@paquier.xyz
In reply to: Syed, Rahila (#384)
#386Andres Freund
andres@anarazel.de
In reply to: Syed, Rahila (#384)
#387Fujii Masao
masao.fujii@gmail.com
In reply to: Andres Freund (#386)
#388Fujii Masao
masao.fujii@gmail.com
In reply to: Andres Freund (#363)
#389Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#385)
#390Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#389)
#391Michael Paquier
michael@paquier.xyz
In reply to: Michael Paquier (#390)
#392Rahila Syed
rahilasyed90@gmail.com
In reply to: Michael Paquier (#390)
#393Fujii Masao
masao.fujii@gmail.com
In reply to: Michael Paquier (#390)
#394Fujii Masao
masao.fujii@gmail.com
In reply to: Rahila Syed (#392)
#395Michael Paquier
michael@paquier.xyz
In reply to: Fujii Masao (#394)