Per table autovacuum vacuum cost limit behaviour strange

Started by Mark Kirkwood · about 12 years ago · 40 messages · pgsql-hackers
#1 Mark Kirkwood
mark.kirkwood@catalyst.net.nz

A while back we were discussing rapid space bloat of tables under
certain circumstances. One further case I am examining is a highly
volatile single table, and how to tame its space blowout.

I've got a nice simple example (attached). Making use of pgbench to run
it as usual:

$ createdb cache
$ psql cache < schema.sql
$ pgbench -n -c8 -T300 -f volatile0.sql cache

...causes the table (imaginatively named 'cache0') to grow several GB
with default autovacuum parameters. Some minimal changes will rein in
the growth to about 100MB:

$ grep -e naptime -e autovacuum_vacuum_cost_limit postgresql.conf
autovacuum_naptime = 5s
autovacuum_vacuum_cost_limit = 10000

However the cost_limit setting is likely to be way too aggressive
globally. No problem I figured, I'd leave it at the default (200) and
use ALTER TABLE to change it for *just* the 'cache0' table:

cache=# ALTER TABLE cache0 SET (autovacuum_vacuum_cost_limit=10000);

However re-running the pgbench test results in several GB worth of space
used by this table. Hmmm - looks like setting this parameter per table
does not work how I expected. Looking at
src/backend/postmaster/autovacuum.c I see some balancing calculations in
autovac_balance_cost() and AutoVacuumUpdateDelay(), the effect of which
seems to be (after adding some debugging elogs) to reset the actual
effective cost_limit back to 200 for this table, viz (rel 16387 is cache0):

LOG: autovac_balance_cost(pid=24058 db=16384, rel=16387,
cost_limit=200, cost_limit_base=10000, cost_delay=20)
LOG: autovac_update_delay(pid=24058 db=16384, rel=16387,
cost_limit=200, cost_delay=20)

Is this working as intended? I did wonder if it was an artifact of only
having 1 table (creating another one made no difference)...or perhaps
only 1 active worker... I found I had to lobotomize the balancing calc
by doing:

cache=# ALTER TABLE cache0 SET (autovacuum_vacuum_cost_delay=0);

before I got the same effect as just setting the cost_limit globally.
I'm now a bit confused about whether I understand how setting cost_limit
and cost_delay via ALTER TABLE works (or in fact if it is working
properly for that matter).

Regards

Mark

Attachments:

schema.sql (text/x-sql)
volatile0.sql (text/x-sql)
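
(The attachments are not reproduced in this archive. As a rough sketch of
what they plausibly contain - a single narrow table plus a pgbench script
that rewrites random rows; the real scripts may well differ:

-- schema.sql (hypothetical reconstruction)
CREATE TABLE cache0 (id int PRIMARY KEY, content text);
INSERT INTO cache0 SELECT i, repeat('x', 500) FROM generate_series(1, 1000) i;

-- volatile0.sql (hypothetical pgbench script, 9.3-era \setrandom syntax)
\setrandom id 1 1000
UPDATE cache0 SET content = repeat('y', 500) WHERE id = :id;

Every UPDATE leaves a dead row version behind, so eight clients hammering
a small table for 300 seconds produce garbage far faster than a
default-throttled autovacuum can reclaim it.)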
#2 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Mark Kirkwood (#1)
Re: Per table autovacuum vacuum cost limit behaviour strange


When I went through the code to check this, I observed the following
behavior.

The default values of the vacuum parameters are cost_limit = 200 and
cost_delay = 0. The default values of the autovacuum parameters are
cost_limit = -1 (i.e. fall back to the vacuum value) and cost_delay = 20ms.

1. The user has not set any vacuum parameters on the table, so the vacuum
options for the table are cost_limit = 200 and cost_delay = 20.
2. The user has set cost_limit = 1000 on the table, so the vacuum options
for the table are cost_limit = 1000 and cost_delay = 20.

For the above two cases, the "autovac_balance_cost" function sets the cost
parameters to cost_limit = 200 and cost_delay = 20.

3. The user has set cost_limit = 1000 and cost_delay = 10 on the table, so
the vacuum options for the table are cost_limit = 1000 and cost_delay = 10.

In this case the effective values are cost_limit = 100 and cost_delay = 10.

4. The user has set cost_limit = 1000 and cost_delay = 100 on the table,
so the vacuum options for the table are cost_limit = 1000 and
cost_delay = 100.

In this case the effective values are cost_limit = 1000 and
cost_delay = 100.

From the above observations, the vacuum cost parameters are not working
as specified. Please correct me if anything in my observations is wrong.
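
(For readers following the code: the balancing arithmetic in the 9.3-era
autovac_balance_cost() can be modelled - this is a sketch from reading the
function, not the function itself - as

    effective_limit = clamp((vac_cost_limit / vac_cost_delay)
                            * base / sum_over_workers(base / delay),
                            1, base)

With a single active worker and the global defaults of limit 200 and delay
20ms, a quick SQL rendering reproduces cases 2-4 above; each VALUES row is
a separate single-worker scenario, so the sum collapses to base/delay:

WITH worker(observed_case, base, delay) AS (
     VALUES ('case 2', 1000, 20.0),
            ('case 3', 1000, 10.0),
            ('case 4', 1000, 100.0)
)
SELECT observed_case,
       greatest(least(trunc((200 / 20.0) * base / (base / delay))::int,
                      base), 1) AS effective_cost_limit
FROM worker;
-- => 200, 100 and 1000 respectively
)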

Regards,
Hari Babu
Fujitsu Australia

#3 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Haribabu Kommi (#2)
Re: Per table autovacuum vacuum cost limit behaviour strange


FWIW - I can confirm these calculations in 9.4devel. I found the
attached patch handy for logging what the balanced limit and delay were.

Regards

Mark

Attachments:

autovacuum.c.diff (text/x-patch, +7/-1)
#4 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Mark Kirkwood (#3)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Thu, Feb 13, 2014 at 3:31 PM, Mark Kirkwood wrote:


FWIW - I can confirm these calculations in 9.4devel. I found the
attached patch handy for logging what the balanced limit and delay were.

I changed the balance cost calculations a little to give priority to the
user-provided per-table autovacuum parameters. If a table has
user-specified vacuum parameters that differ from the GUC vacuum
parameters, the balance cost calculation does not include that worker;
the cost is distributed only among the workers running with the GUC
vacuum cost parameters.

The problem with this calculation is that if the user sets the per-table
values equal to the GUC values, those tables are not considered specially
in the calculation. A patch is attached to this mail. Please provide your
suggestions or corrections on this approach.

Regards,
Hari Babu
Fujitsu Australia

Attachments:

per_table_vacuum_para_v1.patch (+35/-38)
#5 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Haribabu Kommi (#4)
Re: Per table autovacuum vacuum cost limit behaviour strange

I hadn't noticed this thread. I will give this a look. Thanks for
providing a patch.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#6 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Haribabu Kommi (#4)
Re: Per table autovacuum vacuum cost limit behaviour strange

Haribabu Kommi wrote:

I changed the balance cost calculations a little to give priority to the
user-provided per-table autovacuum parameters. If a table has
user-specified vacuum parameters that differ from the GUC vacuum
parameters, the balance cost calculation does not include that worker;
the cost is distributed only among the workers running with the GUC
vacuum cost parameters.

The problem with this calculation is that if the user sets the per-table
values equal to the GUC values, those tables are not considered specially
in the calculation.

I think this is a strange approach to the problem, because if you
configure the backends just so, they are completely ignored instead of
being adjusted. And this would have action-at-a-distance consequences
because if you change the defaults in postgresql.conf you end up with
completely different behavior on the tables for which you have carefully
tuned the delay so that they are ignored in rebalance calculations.

I think that rather than ignoring some backends completely, we should be
looking at how to "weight" the balancing calculations among all the
backends in some smart way that doesn't mean they end up with the
default values of limit, which AFAIU is what happens now -- which is
stupid. Not real sure how to do that, perhaps base it on the
globally-configured ratio of delay/limit vs. the table-specific ratio.

What I mean is that perhaps the current approach is all wrong and we
need to find a better algorithm to suit this case and more generally.
Of course, I don't mean to say that it should behave completely
differently than now in normal cases, only that it shouldn't give
completely stupid results in corner cases such as this one.

As an example, suppose that global limit=200 and global delay=20 (the
defaults). Then we have a global ratio of 10. If all three tables being
vacuumed currently are using the default values, then they all have
ratio=10 and therefore all should have the same limit and delay settings
applied after rebalance. Now, if two tables have ratio=10 and one table
has been configured to have a very fast vacuum, that is limit=10000,
then the ratio for that table is 10000/20=500. Therefore that table should
be configured, after rebalance, to have a limit and delay that are 50
times faster than the settings for the other two tables. (And there is
a further constraint that the total delay per "limit unit" should be
so-and-so to accommodate getting the correct total delay per limit unit.)

I haven't thought about how to code that, but I don't think it should be
too difficult. Want to give it a try? I think it makes sense to modify
both the running delay and the running limit to achieve whatever ratio
we come up with, except that delay should probably not go below 10ms
because, apparently, some platforms have that much sleep granularity
and it wouldn't really work to have a smaller delay.

Am I making sense?

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#7 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Alvaro Herrera (#6)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Feb 15, 2014 9:19 AM, "Alvaro Herrera" <alvherre@2ndquadrant.com> wrote:

Am I making sense?

Yes, that makes sense, and it's a good approach not to leave the delay
parameter as is. Thanks, I will give it a try.

Regards,
Hari Babu
Fujitsu Australia

#8 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Haribabu Kommi (#7)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Sat, Feb 15, 2014 at 10:47 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:


Yes, that makes sense, and it's a good approach not to leave the delay
parameter as is. Thanks, I will give it a try.

I modified the "autovac_balance_cost" function to balance the costs using
the number of running workers, instead
of default vacuum cost parameters.

Lets assume there are 4 workers running currently with default cost values
of limit 200 and delay 20ms.
The cost will be distributed as 50 and 10ms each.

Suppose if one worker is having a different cost limit value as 1000, which
is 5 times more than default value.
The cost will be distributed as 50 and 10ms each for other 3 workers and
250 and 10ms for the worker having
cost limit value other than default. By this way also it still ensures the
cost limit value is 5 times more than other workers.

By this way the worker with user specified autovacuum cost parameters is
not ignored completely.
Patch is attached. Please let me know your suggestions.

Regards,
Hari Babu
Fujitsu Australia

Attachments:

per_table_vacuum_para_v2.patch (+40/-54)
#9 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Haribabu Kommi (#8)
Re: Per table autovacuum vacuum cost limit behaviour strange

Haribabu Kommi wrote:

I modified the "autovac_balance_cost" function to balance the costs using
the number of running workers, instead
of default vacuum cost parameters.

Just as a heads-up, this patch wasn't part of the commitfest, but I
intend to review it and possibly commit for 9.4. Not immediately but at
some point.

Arguably this is a bug fix, since the autovac rebalance code behaves
horribly in cases such as the one described here, so I should consider a
backpatch right away. However I don't think it's a good idea to do that
without more field testing. Perhaps we can backpatch later if the new
code demonstrates its sanity.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#10 Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#8)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Mon, Feb 17, 2014 at 7:38 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:

I modified the "autovac_balance_cost" function to balance the costs using
the number of running workers, instead
of default vacuum cost parameters.

Lets assume there are 4 workers running currently with default cost values
of limit 200 and delay 20ms.
The cost will be distributed as 50 and 10ms each.

Suppose if one worker is having a different cost limit value as 1000, which
is 5 times more than default value.
The cost will be distributed as 50 and 10ms each for other 3 workers and 250
and 10ms for the worker having
cost limit value other than default. By this way also it still ensures the
cost limit value is 5 times more than other workers.

Won't this change break the basic idea of autovacuum_vacuum_cost_limit
which is as follows:
"Note that the value is distributed proportionally among the running autovacuum
workers, if there is more than one, so that the sum of the limits of each worker
never exceeds the limit on this variable.".

Basically, with the proposed change, the sum of the limits of all the
workers can be more than autovacuum_vacuum_cost_limit, and I think the
main reason is that the new calculation doesn't consider
autovacuum_vacuum_cost_limit or other similar parameters.

I think the current calculation gives appropriate consideration to
table-level vacuum settings when autovacuum_vacuum_cost_limit is
configured with more care (i.e. it is more than the table-level settings).
As an example, consider the below case:

autovacuum_vacuum_cost_limit = 10000
t1 (autovacuum_vacuum_cost_limit = 1000)
t2 (default)
t3 (default)
t4 (default)

Consider other settings as Default.

Now cost_limit after autovac_balance_cost is as follows:
Worker-1 for t1 = 322
Worker-2 for t2 = 3225
Worker-3 for t3 = 3225
Worker-4 for t4 = 3225

So in this way, proper consideration has been given to the table-level
vacuum settings and to the GUC autovacuum_vacuum_cost_limit in the
current code.
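
(These numbers can be cross-checked with the same model of
autovac_balance_cost() sketched earlier in the thread - a reading of the
code, not the code itself: each worker gets
(autovacuum_vacuum_cost_limit / autovacuum_vacuum_cost_delay) * base_i /
sum_j(base_j / delay_j), clamped to [1, base_i]:

WITH w(tab, base, delay) AS (
     VALUES ('t1', 1000, 20.0), ('t2', 10000, 20.0),
            ('t3', 10000, 20.0), ('t4', 10000, 20.0)
), totals AS (
     SELECT sum(base / delay) AS cost_total FROM w
)
SELECT tab,
       greatest(least(trunc((10000 / 20.0) * base / cost_total)::int,
                      base), 1) AS balanced_cost_limit
FROM w, totals
ORDER BY tab;
-- => t1 = 322, t2..t4 = 3225
)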

Now it might be the case that we want to improve the current calculation
for cases where it doesn't work well, but I think it has to be better than
the current behaviour, and it is better to consider both the GUCs and the
table-level settings with some better formula.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#11 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Amit Kapila (#10)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Mon, May 5, 2014 at 1:09 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:


Won't this change break the basic idea of autovacuum_vacuum_cost_limit
which is as follows:
"Note that the value is distributed proportionally among the running autovacuum
workers, if there is more than one, so that the sum of the limits of each worker
never exceeds the limit on this variable.".

It is not breaking that behavior. This setting can be overridden for
individual tables by changing storage parameters, and the cost values for
the tables with default settings still stay under the GUC limit.

Basically, with the proposed change, the sum of the limits of all the
workers can be more than autovacuum_vacuum_cost_limit, and I think the
main reason is that the new calculation doesn't consider
autovacuum_vacuum_cost_limit or other similar parameters.

If the user doesn't provide any table-specific value, then the
autovacuum_vacuum_cost_limit GUC value is applied to the table, so that
value is used in the calculation.

I think the current calculation gives appropriate consideration to
table-level vacuum settings when autovacuum_vacuum_cost_limit is
configured with more care (i.e. it is more than the table-level settings).
As an example, consider the below case:

autovacuum_vacuum_cost_limit = 10000
t1 (autovacuum_vacuum_cost_limit = 1000)
t2 (default)
t3 (default)
t4 (default)

Consider other settings as Default.

Now cost_limit after autovac_balance_cost is as follows:
Worker-1 for t1 = 322
Worker-2 for t2 = 3225
Worker-3 for t3 = 3225
Worker-4 for t4 = 3225

So in this way, proper consideration has been given to the table-level
vacuum settings and to the GUC autovacuum_vacuum_cost_limit in the
current code.

It works for the case where the table-specific values are less than the
default cost limit. The same logic doesn't work with higher values, and
usually the table-specific values are set higher than the defaults on
exactly those tables where faster vacuuming is expected.

Now it might be the case that we want to improve the current calculation
for cases where it doesn't work well, but I think it has to be better than
the current behaviour, and it is better to consider both the GUCs and the
table-level settings with some better formula.

With the proposed change, it works fine whether the table-specific value
is higher or lower than the default value; each worker's share is scaled
by the ratio of the table-specific value to the default value.

default autovacuum_vacuum_cost_limit = 10000

t1 = 1000, t2 = default, t3 = default, t4 = default
  --> balanced costs: t1 = 250, t2 = 2500, t3 = 2500, t4 = 2500

t1 = 20000, t2 = default, t3 = default, t4 = default
  --> balanced costs: t1 = 5000, t2 = 2500, t3 = 2500, t4 = 2500
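
(A sketch of those numbers as I read the v2 patch - not the patch itself:
each worker's share is autovacuum_vacuum_cost_limit / n_workers, scaled by
the ratio of the table-specific limit to the GUC value:

WITH w(tab, base) AS (
     VALUES ('t1', 1000), ('t2', 10000), ('t3', 10000), ('t4', 10000)
)
SELECT tab, (10000 / 4) * base / 10000 AS balanced_cost_limit
FROM w;
-- => t1 = 250, t2..t4 = 2500; with t1 = 20000 instead, t1 gets 5000
)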

Regards,
Hari Babu
Fujitsu Australia


#12 Amit Kapila
amit.kapila16@gmail.com
In reply to: Haribabu Kommi (#11)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Mon, May 5, 2014 at 6:35 AM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:


It is not breaking that behavior. This setting can be overridden for
individual tables by changing storage parameters, and the cost values for
the tables with default settings still stay under the GUC limit.

Can you think of a case where the current calculation doesn't follow
what I mentioned above ("the sum of the limits of each worker never
exceeds the limit on this variable")?

What I understand here is that the sum of cost_limit over all autovacuum
workers should never exceed the value of autovacuum_vacuum_cost_limit,
which always seems to be the case in the current code, but the same is
not true for the proposed patch.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#13 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Amit Kapila (#12)
Re: Per table autovacuum vacuum cost limit behaviour strange

On 05/05/14 15:22, Amit Kapila wrote:

What I understand here is that the sum of cost_limit over all autovacuum
workers should never exceed the value of autovacuum_vacuum_cost_limit,
which always seems to be the case in the current code, but the same is
not true for the proposed patch.

Right, but have a look at the 1st message in this thread - the current
behavior (and to a large extent the above condition) means that setting
cost limits per table does not work in any remotely intuitive way.

ITSM that the whole purpose of a per table setting in this context is to
override the behavior of auto vacuum throttling - and currently this
does not happen unless you get real brutal (i.e. setting the cost delay
to zero in addition to setting cost limit...making the whole cost limit
a bit pointless).

regards

Mark


#14 Amit Kapila
amit.kapila16@gmail.com
In reply to: Mark Kirkwood (#13)
Re: Per table autovacuum vacuum cost limit behaviour strange

On Mon, May 5, 2014 at 11:57 AM, Mark Kirkwood
<mark.kirkwood@catalyst.net.nz> wrote:

On 05/05/14 15:22, Amit Kapila wrote:
Right, but have a look at the 1st message in this thread - the current
behavior (and to a large extent the above condition) means that setting
cost limits per table does not work in any remotely intuitive way.

ITSM that the whole purpose of a per table setting in this context is to
override the behavior of auto vacuum throttling - and currently this does
not happen unless you get real brutal (i.e. setting the cost delay to zero in
addition to setting cost limit...making the whole cost limit a bit
pointless).

I think the meaning of a per-table setting is just that it overrides the
default value of autovacuum_vacuum_cost_limit for that table, and the
rest of the calculation or concept remains the same. This is what the
code currently does, and the same is mentioned in the docs as far as I
can understand.

As per the current behaviour, the per-table cost_limit is also adjusted
based on the setting of the GUC autovacuum_vacuum_cost_limit, and right
now it follows the simple principle that the total cost limit over all
workers should be equal to autovacuum_vacuum_cost_limit. Even the code
has the below comment:

/*
* Adjust cost limit of each active worker to balance the total of cost
* limit to autovacuum_vacuum_cost_limit.
*/

Now, if you want to change the case where the user specifies a per-table
value that is more than autovacuum_vacuum_cost_limit (or otherwise), then
I think the new definition should be a bit more clear, and it is better
not to impact the current calculation for default values.

I could think of 2 ways to change this:

a. If the user has specified a cost_limit value for a table, then just
use it rather than rebalancing based on the value of the system-wide GUC
autovacuum_vacuum_cost_limit.
b. Alternatively, restrict the per-table value to be less than the
system-wide value.

The former approach is used for autovacuum parameters like scale_factor,
and the latter is used for parameters like freeze_max_age.

Thoughts?

Alvaro, do you think the above options make sense to solve this problem?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#15 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Amit Kapila (#14)
Re: Per table autovacuum vacuum cost limit behaviour strange

On 06/05/14 16:28, Amit Kapila wrote:


I could think of 2 ways to change this:

a. If the user has specified a cost_limit value for a table, then just
use it rather than rebalancing based on the value of the system-wide GUC
autovacuum_vacuum_cost_limit.
b. Alternatively, restrict the per-table value to be less than the
system-wide value.

The former approach is used for autovacuum parameters like scale_factor,
and the latter is used for parameters like freeze_max_age.

Thoughts?

Alvaro, do you think the above options make sense to solve this problem?

Yes indeed - the code currently works differently from what one would
expect. However, the usual reason for handing knobs to the user for
individual objects is so that special configurations can be applied to
them. The current method of operation of the per-table knobs does not do
this (not without clubbing 'em on the head).

The (ahem) sensible way that one would expect (perhaps even need)
autovacuum throttling to work is:

- set sensible defaults for all the usual (well behaved) tables
- set a few really aggressive overrides for a handful of the naughty ones

Runaway free space bloat is one of the things that can really mangle a
postgres system (I've been called in to rescue a few in my time)...
there needs to be a way to control those few badly behaved tables ...
without removing the usefulness of throttling the others.

Regards

Mark


#16 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mark Kirkwood (#15)
Re: Per table autovacuum vacuum cost limit behaviour strange

Mark Kirkwood wrote:

On 06/05/14 16:28, Amit Kapila wrote:


I could think of 2 ways to change this:

a. If the user has specified a cost_limit value for a table, then just
use it rather than rebalancing based on the value of the system-wide GUC
autovacuum_vacuum_cost_limit.
b. Alternatively, restrict the per-table value to be less than the
system-wide value.

The former approach is used for autovacuum parameters like scale_factor,
and the latter is used for parameters like freeze_max_age.

Thoughts?

Alvaro, do you think the above options make sense to solve this problem?

I've been giving some thought to this. Really, there is no way to
handle this sensibly while at the same time keeping the documented
behavior -- or in other words, what we have documented is not useful
behavior. Your option (b) above is an easy solution to the problem,
however it means that the user will have serious trouble configuring the
system in scenarios such as volatile tables, as Mark says -- essentially
that will foreclose the option of using autovacuum for them.

I'm not sure I like your (a) proposal much better. One problem there is
that if you set the values for a table to be exactly the same values as
in postgresql.conf, it will behave completely differently because it will
not participate in balancing. To me this seems to violate POLA.

I checked Haribabu's latest patch in this thread, and didn't like it
much. If you set up a table to have cost_limit=1000, it runs at that
speed when vacuumed alone; but if there are two workers, it goes at half
the speed even if the other one is configured with a very small
cost_limit (in essence, "wasting" the allocated I/O bandwidth). Three
workers, it goes at a third of the speed -- again, even if the other
tables are configured to go much slower than the volatile one. This
seems too simplistic. It might be okay when you have only one or two
very large or high-churn tables, and small numbers of workers, but it's
not unreasonable to think that you might have lots more workers if your
DB has many high-churn tables.

So my proposal is a bit more complicated. First we introduce the notion
of a single number, to enable sorting and computations: the "delay
equivalent", which is the cost_limit divided by cost_delay. The highest
the value is for any table, the fastest it is vacuumed. (It makes sense
in physical terms: a higher cost_limit makes it faster, because vacuum
sleeps less often; and a higher cost_delay makes it go slower, because
vacuums sleeps for longer.) Now, the critical issue is to notice that
not all tables are equal; they can be split in two groups, those that go
faster than the global delay equivalent
(i.e. the effective values of GUC variables
autovacuum_vacuum_cost_limit/autovacuum_vacuum_cost_delay), and those
that go equal or slower. For the latter group, the rebalancing
algorithm "distributes" the allocated I/O by the global vars, in a
pro-rated manner. For the former group (tables vacuumed faster than
global delay equiv), to rebalance we don't consider the global delay
equiv but the delay equiv of the fastest table currently being vacuumed.

Suppose we have two tables, delay_equiv=10 each (which is the default
value). If they are both vacuumed in parallel, then we distribute a
delay_equiv of 5 to each (so set cost_limit=100, cost_delay=20). As
soon as one of them finishes, the remaining one is allowed to upgrade to
delay_equiv=10 (cost_limit=200, cost_delay=20).

Now add a third table, delay_equiv=500 (cost_limit=10000, cost_delay=20;
this is Mark's volatile table). If it's being vacuumed on its own, just
assign cost_limit=10000, cost_delay=20, as normal. If one of the other
two tables is being vacuumed, that one will use delay_equiv=10, as per
above. To balance the volatile table, we take the delay_equiv of this
one and subtract the already handed-out delay_equiv of 10; so we set the
volatile table to delay_equiv=490 (cost_limit=9800, cost_delay=20).

If we do it this way, the whole system runs at the full speed enabled by
the fastest table for which we have set per-table options, but we have
also scaled things so that the slow tables go slow and the fast tables go
fast.

As a more elaborate example, add a fourth table with delay_equiv=50
(cost_limit=1000, cost_delay=20). This is also faster than the global
vars, so we put it in the first group. If all four tables are being
vacuumed in parallel, we have the two slow tables going at delay_equiv=5
each (cost_limit=100, cost_delay=20); then there are delay_equiv=490 to
distribute among the remaining ones; pro-rating this we have
delay_equiv=445 (cost_limit=8900, cost_delay=20) for the volatile table
and delay_equiv=45 (cost_limit=900, cost_delay=20) for the other one.

If one of the slowest tables finishes vacuuming, the other one will
speed up to delay_equiv=10, and the two fastest ones will go on
unchanged. If both finish and the fast tables keep going, the faster
one will go at delay_equiv=454 and the other one at delay_equiv=45.
Note that the volatile table will go a bit faster while the other one is
barely affected.

Essentially, if you configure a table with a delay-equiv that's greater
than the system configured values, you're giving permission for vacuum
to use more I/O, but each table has its own limit to how fast it can go.
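
(The arithmetic above condenses into a small model - my reading of the
proposal, not the patch itself. Tables at or below the global delay
equivalent of 10 share that 10 pro rata; tables above it share
max(delay_equiv) minus the already handed-out 10, also pro rata. A quick
SQL rendering, using 9.4's aggregate FILTER syntax:

WITH t(tab, cost_limit, cost_delay) AS (
     VALUES ('slow1', 200, 20.0), ('slow2', 200, 20.0),
            ('volatile', 10000, 20.0), ('fast', 1000, 20.0)
), e AS (
     SELECT tab, cost_delay, cost_limit / cost_delay AS de,
            cost_limit / cost_delay > 10 AS is_fast
     FROM t
), b AS (
     SELECT sum(de) FILTER (WHERE NOT is_fast)  AS slow_sum,
            sum(de) FILTER (WHERE is_fast)      AS fast_sum,
            max(de) FILTER (WHERE is_fast) - 10 AS fast_budget
     FROM e
)
SELECT tab,
       round(CASE WHEN is_fast THEN fast_budget * de / fast_sum
                  ELSE 10 * de / slow_sum END) * cost_delay
         AS balanced_cost_limit
FROM e, b;
-- => slow1/slow2 = 100, volatile = 8900, fast = 900, all at cost_delay=20,
--    i.e. delay equivalents of 5, 5, 445 and 45, matching the worked
--    example above.
)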

The (ahem) sensible way that one would expect (perhaps even need)
autovacuum throttling to work is:

- set sensible defaults for all the usual (well behaved) tables
- set a few really aggressive overrides for a handful of the naughty ones

Does my proposal above satisfy your concerns?

Runaway free space bloat is one of the things that can really mangle
a postgres system (I've been called in to rescue a few in my
time)... there needs to be a way to control those few badly behaved
tables ... without removing the usefulness of throttling the others.

Agreed.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#17 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#16)
Re: Per table autovacuum vacuum cost limit behaviour strange

Alvaro Herrera wrote:

So my proposal is a bit more complicated. First we introduce the notion
of a single number, to enable sorting and computations: the "delay
equivalent", which is the cost_limit divided by cost_delay.

Here's a patch that implements this idea. As you see, this is quite a
bit more complicated than Haribabu's proposal.

There are two holes in this:

1. if you ALTER DATABASE to change vacuum delay for a database, those
values are not considered in the global equiv delay. I don't think this
is very important and anyway we haven't considered this very much, so
it's okay if we don't handle it.

2. If you have a "fast worker" that's only slightly faster than regular
workers, it will become slower in some cases. This is explained in a
FIXME comment in the patch.

I don't really have any more time to invest in this, but I would like to
see it in 9.4. Mark, would you test this? Haribabu, how open are you
to fixing point (2) above?

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

per_table_vacuum_para_v3.patch (text/x-diff, +223/-170)
#18 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Alvaro Herrera (#17)
Re: Per table autovacuum vacuum cost limit behaviour strange


Thanks Alvaro - I will take a look.

regards

Mark


#19 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Alvaro Herrera (#17)
Re: Per table autovacuum vacuum cost limit behaviour strange


Thanks Alvaro. I will check point (2).

Regards,
Hari Babu
Fujitsu Australia


#20 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Alvaro Herrera (#17)
Re: Per table autovacuum vacuum cost limit behaviour strange


I did some testing with this patch applied.

Minimally tweaking autovacuum (naptime of 5s) with a single table
'cache0' created with a cost limit setting of 10000, running:

$ pgbench -n -c8 -T300 -f volatile0.sql cache

and monitoring the size of the 'cache0' table showed a steady state of:

cache=# SELECT pg_relation_size('cache0')/(1024*1024) AS mb;
 mb
----
 85

So far so good. I then added another table 'cache1', similar to the
previous one but lacking any per-table autovacuum settings, and ran 2
pgbench sessions:

$ pgbench -n -c8 -T300 -f volatile0.sql cache
$ pgbench -n -c8 -T300 -f volatile1.sql cache

(volatile1.sql just uses table 'cache1' instead of 'cache0'.) After a
few minutes this showed:

cache=# SELECT relname, pg_relation_size(oid)/(1024*1024) AS mb
        FROM pg_class WHERE relname LIKE 'cache_';
 relname |  mb
---------+------
 cache0  |  664
 cache1  | 1900

So we are definitely seeing the 'fast' worker being slowed down. Also,
the growth of 'cache1' was only a bit faster than 'cache0' - so the
'slow' worker was getting a speed boost as well.

So looks like good progress, but yeah - point (2) is obviously rearing
its head in this test.

Cheers

Mark

Attachments:

schema.sql (application/sql)
volatile0.sql (application/sql)
#21 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#16)
#22 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#21)
#23 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#22)
#24 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#23)
#25 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#24)
#26 Mark Kirkwood
mark.kirkwood@catalyst.net.nz
In reply to: Alvaro Herrera (#24)
#27 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Mark Kirkwood (#26)
#28 Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#16)
#29 Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#22)
#30 Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Alvaro Herrera (#17)
#31 Gregory Smith
gregsmithpgsql@gmail.com
In reply to: Robert Haas (#21)
#32 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#21)
#33 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#32)
#34 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#32)
#35 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#34)
#36 Stephen Frost
sfrost@snowman.net
In reply to: Robert Haas (#35)
#37 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Stephen Frost (#36)
#38 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#37)
#39 Stephen Frost
sfrost@snowman.net
In reply to: Alvaro Herrera (#38)
#40 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Stephen Frost (#39)