Fun fact about autovacuum and orphan temp tables
Hello, hackers!
We were testing how well some application works with PostgreSQL and
stumbled upon an autovacuum behavior which I fail to understand.
The application in question has a habit of using temporary tables heavily,
in funny ways.
For example, it creates A LOT of them.
Which is OK.
The funny part is that it never drops them. So when the backend is finally
terminated, it tries to drop them all and fails with this error:
FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction
If I understand that right, we are trying to drop all these temp tables
in one transaction and running out of locks to do so.
After that, postgresql.log is flooded at a rate of 1k/s with messages
like this:
LOG: autovacuum: found orphan temp table "pg_temp_15"."tt38147" in
database "DB_TEST"
It produces a noticeable load on the system, and it's getting worse with
every terminated backend or restart.
I did some RTFS and it appears that autovacuum has no intention of
cleaning those orphan tables up unless it's wraparound time:
src/backend/postmaster/autovacuum.c
/* We just ignore it if the owning backend is still active */
if (backendID == MyBackendId ||
    BackendIdGetProc(backendID) == NULL)
{
    /*
     * We found an orphan temp table (which was probably left
     * behind by a crashed backend).  If it's so old as to need
     * vacuum for wraparound, forcibly drop it.  Otherwise just
     * log a complaint.
     */
    if (wraparound)
    {
        ObjectAddress object;

        ereport(LOG,
                (errmsg("autovacuum: dropping orphan temp table \"%s\".\"%s\" in database \"%s\"",
                        get_namespace_name(classForm->relnamespace),
                        NameStr(classForm->relname),
                        get_database_name(MyDatabaseId))));
        object.classId = RelationRelationId;
        object.objectId = relid;
        object.objectSubId = 0;
        performDeletion(&object, DROP_CASCADE,
                        PERFORM_DELETION_INTERNAL);
    }
    else
    {
        ereport(LOG,
                (errmsg("autovacuum: found orphan temp table \"%s\".\"%s\" in database \"%s\"",
                        get_namespace_name(classForm->relnamespace),
                        NameStr(classForm->relname),
                        get_database_name(MyDatabaseId))));
    }
}
What is more troubling is that pg_statistic is starting to bloat badly.
LOG: automatic vacuum of table "DB_TEST.pg_catalog.pg_statistic": index
scans: 0
pages: 0 removed, 68225 remain, 0 skipped due to pins
tuples: 0 removed, 2458382 remain, 2408081 are dead but not yet
removable
buffer usage: 146450 hits, 31 misses, 0 dirtied
avg read rate: 0.010 MB/s, avg write rate: 0.000 MB/s
system usage: CPU 3.27s/6.92u sec elapsed 23.87 sec
What is the purpose of keeping orphan tables around and not dropping
them on the spot?
--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On 09/05/2016 01:54 PM, Grigory Smolkin wrote:
What is the purpose of keeping orphan tables around and not dropping
them on the spot?
You can read the discussion about it here:
/messages/by-id/3507.1214581513@sss.pgh.pa.us
--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Grigory Smolkin wrote:
Funny part is that it never drops them. So when backend is finally
terminated, it tries to drop them and fails with error:
FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction
If I understand that right, we are trying to drop all these temp tables in
one transaction and running out of locks to do so.
Hmm, yeah, I suppose it does that, and it does seem pretty inconvenient.
It is certainly pointless to hold onto these locks for temp tables. I
wonder how ugly it would be to fix this problem ...
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 09/05/2016 04:34 PM, Alvaro Herrera wrote:
Grigory Smolkin wrote:
Funny part is that it never drops them. So when backend is finally
terminated, it tries to drop them and fails with error:
FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction
If I understand that right, we are trying to drop all these temp tables in
one transaction and running out of locks to do so.
Hmm, yeah, I suppose it does that, and it does seem pretty inconvenient.
It is certainly pointless to hold onto these locks for temp tables. I
wonder how ugly it would be to fix this problem ...
Thank you for your interest in this problem.
I don't think this is the source of the problem. An ugly fix here would
only force the backend to terminate properly.
It will not help at all in case of a server crash or power outage.
We need a way to tell autovacuum that we don't need orphan temp tables,
so they can be removed using the existing routine.
The least invasive solution would be to have a GUC, something like
'keep_orphan_temp_tables' with a boolean value,
which would determine the autovacuum worker's policy toward encountered
orphan temp tables.
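For concreteness, such a setting might look like this in postgresql.conf (this GUC does not exist; both the name and the semantics are just the proposal above):

```
# Hypothetical GUC sketched in this message -- not an actual PostgreSQL setting.
# on  = keep orphan temp tables around for forensics (current behavior)
# off = let autovacuum drop orphan temp tables on sight
keep_orphan_temp_tables = off
```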
--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Grigory Smolkin wrote:
On 09/05/2016 04:34 PM, Alvaro Herrera wrote:
Grigory Smolkin wrote:
Funny part is that it never drops them. So when backend is finally
terminated, it tries to drop them and fails with error:
FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction
If I understand that right, we are trying to drop all these temp tables in
one transaction and running out of locks to do so.
Hmm, yeah, I suppose it does that, and it does seem pretty inconvenient.
It is certainly pointless to hold onto these locks for temp tables. I
wonder how ugly it would be to fix this problem ...
Thank you for your interest in this problem.
I don't think this is the source of the problem. An ugly fix here would
only force the backend to terminate properly.
It will not help at all in case of a server crash or power outage.
We need a way to tell autovacuum that we don't need orphan temp tables, so
they can be removed using the existing routine.
It is always possible to drop the containing schemas; and as soon as
some other backend uses the BackendId 15 (in your example) the tables
would be removed anyway. This only becomes a longstanding problem when
the crashing backend uses a high-numbered BackendId that's not reused
promptly enough.
The least invasive solution would be to have a GUC, something like
'keep_orphan_temp_tables' with a boolean value,
which would determine the autovacuum worker's policy toward encountered
orphan temp tables.
The stated reason for keeping them around is to ensure you have time to
do some forensics research in case there was something useful in the
crashing backend. My feeling is that if the reason they are kept around
is not a crash but rather some implementation defect that broke end-time
cleanup, then they don't have their purported value and I would rather
just remove them.
I have certainly faced my fair share of customers with dangling temp
tables, and would like to see this changed in some way or another.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Sep 5, 2016 at 12:48:32PM -0300, Alvaro Herrera wrote:
The least invasive solution would be to have a GUC, something like
'keep_orphan_temp_tables' with a boolean value,
which would determine the autovacuum worker's policy toward encountered
orphan temp tables.
The stated reason for keeping them around is to ensure you have time to
do some forensics research in case there was something useful in the
crashing backend. My feeling is that if the reason they are kept around
is not a crash but rather some implementation defect that broke end-time
cleanup, then they don't have their purported value and I would rather
just remove them.
I have certainly faced my fair share of customers with dangling temp
tables, and would like to see this changed in some way or another.
I don't think we look at those temp tables frequently enough to justify
keeping them around for all users.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
On 9/5/16 12:14 PM, Bruce Momjian wrote:
I have certainly faced my fair share of customers with dangling temp
tables, and would like to see this changed in some way or another.
I don't think we look at those temp tables frequently enough to justify
keeping them around for all users.
Plus, if we cared about forensics, we'd prevent re-use of the orphaned
schemas by new backends. That doesn't seem like a good idea for normal
use, but if we had a preserve_orphaned_temp_objects GUC someone could
add that behavior.
Isn't there some other GUC aimed at preserving data for forensics
(besides zero_damaged_pages)? Maybe we could just broaden that to
include orphaned temp objects.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461
On Mon, Sep 5, 2016 at 1:14 PM, Bruce Momjian <bruce@momjian.us> wrote:
On Mon, Sep 5, 2016 at 12:48:32PM -0300, Alvaro Herrera wrote:
The least invasive solution would be to have a GUC, something like
'keep_orphan_temp_tables' with a boolean value,
which would determine the autovacuum worker's policy toward encountered
orphan temp tables.
The stated reason for keeping them around is to ensure you have time to
do some forensics research in case there was something useful in the
crashing backend. My feeling is that if the reason they are kept around
is not a crash but rather some implementation defect that broke end-time
cleanup, then they don't have their purported value and I would rather
just remove them.
I have certainly faced my fair share of customers with dangling temp
tables, and would like to see this changed in some way or another.
I don't think we look at those temp tables frequently enough to justify
keeping them around for all users.
+1. I think it would be much better to nuke them more aggressively.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, 5 Sep 2016 14:54:05 +0300
Grigory Smolkin <g.smolkin@postgrespro.ru> wrote:
Hello, hackers!
[...]
What is the purpose of keeping orphan tables around and not dropping
them on the spot?
Hey hackers,
I tried to fix the problem with a new backend not being
able to reuse a temporary namespace when it contains
thousands of temporary tables. I disabled locking of objects
during the namespace clearing process. See the patch attached.
Please tell me if there are any reasons why this is wrong.
I also added a GUC variable and changed the condition in
autovacuum to let it nuke orphan tables on its way.
See another patch attached.
Regards,
Constantin Pan
On Thu, Sep 8, 2016 at 12:38 AM, Robert Haas <robertmhaas@gmail.com> wrote:
On Mon, Sep 5, 2016 at 1:14 PM, Bruce Momjian <bruce@momjian.us> wrote:
I don't think we look at those temp tables frequently enough to justify
keeping them around for all users.
+1. I think it would be much better to nuke them more aggressively.
+1 from here as well. Making the deletion of orphaned temp tables
mandatory even in the non-wraparound autovacuum case looks to be the
better move to me. I can see that it could be important to be able to
look at some of the temp tables' data after a crash, but the argument
looks weak compared to the potential bloat of catalog tables caused by
those dangling temp relations. And I'd suspect that there are far more
users who would like to see this removal be more aggressive than users
caring about having a look at those orphaned tables after a crash.
--
Michael
On Thu, Oct 20, 2016 at 9:30 PM, Constantin S. Pan <kvapen@gmail.com> wrote:
I tried to fix the problem with a new backend not being
able to reuse a temporary namespace when it contains
thousands of temporary tables. I disabled locking of objects
during namespace clearing process. See the patch attached.
Please tell me if there are any reasons why this is wrong.
That's invasive. I am wondering if a cleaner approach here would be a
flag in deleteOneObject() that performs the lock cleanup, as that's
what you are trying to solve here.
I also added a GUC variable and changed the condition in
autovacuum to let it nuke orphan tables on its way.
See another patch attached.
It seems to me that you'd even want to make the drop of orphaned
tables mandatory once they are detected, even if it is not a wraparound
autovacuum. Dangling temp tables have higher chances to hit users than
diagnostics of orphaned temp tables after a crash. (A background worker
could be used for existing versions to clean that up more aggressively,
actually.)
--
Michael
On Fri, Oct 21, 2016 at 2:29 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
On Thu, Oct 20, 2016 at 9:30 PM, Constantin S. Pan <kvapen@gmail.com> wrote:
[...]
You should as well add your patch to the next commit fest, so as to be
sure that it will attract more reviews and more attention:
https://commitfest.postgresql.org/11/
--
Michael
On Fri, 21 Oct 2016 14:29:24 +0900
Michael Paquier <michael.paquier@gmail.com> wrote:
That's invasive. I am wondering if a cleaner approach here would be a
flag in deleteOneObject() that performs the lock cleanup, as that's
what you are trying to solve here.
The problem occurs earlier, at the findDependentObjects step. All the
objects inside the namespace are being locked before any of them gets
deleted, which leads to the "too many locks" condition.
Cheers,
Constantin Pan
Michael Paquier <michael.paquier@gmail.com> writes:
On Thu, Oct 20, 2016 at 9:30 PM, Constantin S. Pan <kvapen@gmail.com> wrote:
I tried to fix the problem with a new backend not being
able to reuse a temporary namespace when it contains
thousands of temporary tables. I disabled locking of objects
during namespace clearing process. See the patch attached.
Please tell me if there are any reasons why this is wrong.
That's invasive.
Invasive or otherwise, it's *completely unacceptable*. Without a lock
you have no way to be sure that nothing else is touching the table.
A less broken approach might be to split the cleanup into multiple shorter
transactions, that is, after every N objects stop and commit what you've
done so far. This shouldn't be that hard to do during backend exit, as
I'm pretty sure we're starting a new transaction just for this purpose
anyway. I don't know if it'd be possible to do it during the automatic
cleanup when glomming onto a pre-existing temp namespace, because we're
already within a user-started transaction at that point. But if we solve
the problem where it's being created, maybe that's enough for now.
I also added a GUC variable and changed the condition in
autovacuum to let it nuke orphan tables on its way.
See another patch attached.
It seems to me that you'd even want to make the drop of orphaned
tables mandatory once they are detected, even if it is not a wraparound
autovacuum.
If we are willing to do that then we don't really have to solve the
problem on the backend side. One could expect that autovacuum would
clean things up within a few minutes after a backend failure. We'd
have to be really darn sure that that "orphaned backend" test can
never have any false positives, though. I'm not sure that it was
ever designed to be race-condition-proof.
regards, tom lane
On 10/21/16 8:47 AM, Tom Lane wrote:
It seems to me that you'd even want to make the drop of orphaned
tables mandatory once they are detected, even if it is not a wraparound
autovacuum.
If we are willing to do that then we don't really have to solve the
problem on the backend side. One could expect that autovacuum would
clean things up within a few minutes after a backend failure.
Unless all the autovac workers are busy working on huge tables... maybe
a delay of several hours/days is OK in this case, but it's not wise to
assume autovac will always get to something within minutes.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461
On Sat, Oct 22, 2016 at 12:15 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
On 10/21/16 8:47 AM, Tom Lane wrote:
It seems to me that you'd even want to make the drop of orphaned
tables mandatory once they are detected, even if it is not a wraparound
autovacuum.
If we are willing to do that then we don't really have to solve the
problem on the backend side. One could expect that autovacuum would
clean things up within a few minutes after a backend failure.
Unless all the autovac workers are busy working on huge tables... maybe a
delay of several hours/days is OK in this case, but it's not wise to assume
autovac will always get to something within minutes.
I am really thinking that we should just do that and call it a day
then, but document the fact that if one wants to look at the contents
of orphaned tables after a crash, he had better turn autovacuum off
for the duration of the analysis.
--
Michael
Michael Paquier <michael.paquier@gmail.com> writes:
On Sat, Oct 22, 2016 at 12:15 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
On 10/21/16 8:47 AM, Tom Lane wrote:
If we are willing to do that then we don't really have to solve the
problem on the backend side. One could expect that autovacuum would
clean things up within a few minutes after a backend failure.
I am really thinking that we should just do that and call it a day
then, but document the fact that if one wants to look at the contents
of orphaned tables after a crash, he had better turn autovacuum off
for the duration of the analysis.
Yeah, agreed. This also points up the value of Robert's suggestion
of a "really off" setting.
regards, tom lane
On Sat, Oct 22, 2016 at 9:45 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Michael Paquier <michael.paquier@gmail.com> writes:
On Sat, Oct 22, 2016 at 12:15 AM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
On 10/21/16 8:47 AM, Tom Lane wrote:
If we are willing to do that then we don't really have to solve the
problem on the backend side. One could expect that autovacuum would
clean things up within a few minutes after a backend failure.
I am really thinking that we should just do that and call it a day
then, but document the fact that if one wants to look at the contents
of orphaned tables after a crash, he had better turn autovacuum off
for the duration of the analysis.
Yeah, agreed. This also points up the value of Robert's suggestion
of a "really off" setting.
Okay, so I suggest something like the attached as a HEAD-only change,
because that's a behavior modification.
--
Michael
Attachment: autovacuum-orphan-cleanup.patch (text/plain, charset=US-ASCII, +21 -26)
Hi Dilip,
This is a gentle reminder.
You are assigned as a reviewer for the current patch in the 11-2016
commitfest, but you haven't shared your review yet. Can you please share
your views about the patch? This will help the smoother operation of the
commitfest.
Michael has sent an updated patch based on some discussion.
Please ignore this if you have already shared your review.
Regards,
Hari Babu
Fujitsu Australia
On Wed, Nov 16, 2016 at 5:24 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:
This is a gentle reminder.
You are assigned as a reviewer for the current patch in the 11-2016
commitfest, but you haven't shared your review yet. Can you please share
your views about the patch? This will help the smoother operation of the
commitfest.
Thanks for the reminder.
Michael has sent an updated patch based on some discussion.
Please ignore this if you have already shared your review.
Hm. Thinking about that again, having a GUC to control whether
autovacuum drops orphaned temp tables is overkill (who is going to go
into this level of tuning!?), and we had better do something more
aggressive, as there have been cases of users complaining about dangling
temp tables. I suspect the use case where people would like to have a
look at orphaned temp tables after a backend crash is not that common;
at the least, one could disable autovacuum after a crash if such
monitoring is necessary. Tom has already stated upthread that the patch
removing locks wholesale is not acceptable, and he's clearly right.
So the best move would really be to make the removal of orphaned temp
tables more aggressive, and not bother with a GUC to control that.
The patch sent in
/messages/by-id/CAB7nPqSRYwaz1i12mPkH06_roO3ifgCgR88_aeX1OEg2r4OaNw@mail.gmail.com
does so, so I am marking the CF entry as ready for committer to attract
some committer's attention on the matter.
--
Michael