retry shm attach for windows (WAS: Re: OK, so culicidae is *still* broken)
On Sat, May 20, 2017 at 5:56 PM, Noah Misch <noah@leadboat.com> wrote:
On Sat, Apr 15, 2017 at 02:30:18PM -0700, Andres Freund wrote:
On 2017-04-15 17:24:54 -0400, Tom Lane wrote:
Andres Freund <andres@anarazel.de> writes:
On 2017-04-15 17:09:38 -0400, Tom Lane wrote:
Why doesn't Windows' ability to map the segment into the new process
before it executes take care of that?
Because of ASLR of the main executable (i.e. something like PIE).
Not following. Are you saying that the main executable gets mapped into
the process address space immediately, but shared libraries are not?
At the time of the pgwin32_ReserveSharedMemoryRegion() call, the child process
contains only ntdll.dll and the executable.
Without PIE/ASLR we can somewhat rely on pgwin32_ReserveSharedMemoryRegion
to find the space that PGSharedMemoryCreate allocated still unoccupied.
I've never had access to a Windows system that can reproduce the fork
failures. My best theory is that antivirus or similar software injects an
additional DLL at that early stage.
I wonder whether we could work around that by just destroying the created
process and trying again if we get a collision. It'd be a tad
inefficient, but hopefully collisions wouldn't happen often enough to be a
big problem.
That might work, although it's obviously not pretty.
I didn't like that idea when Michael proposed it in 2015. Since disabling
ASLR on the exe proved insufficient, I do like it now. It degrades nicely; if
something raises the collision rate from 1% to 10%, that just looks like fork
latency degrading.
So it seems both you and Tom are leaning towards some sort of retry
mechanism for shm reattach on Windows. I also think that is a viable
option to negate the impact of ASLR. Attached patch does that. Note
that, as I have mentioned above I think we need to do it for shm
reserve operation as well. I think we need to decide how many retries
are sufficient before bailing out. As of now, I have used 10 to have
some similarity with PGSharedMemoryCreate(), but we can choose some
different count as well. One might say that we can have "number of
retries" as a guc parameter, but I am not sure about it, so not used.
Another point to consider is whether we want the same retry mechanism
for EXEC_BACKEND builds (the function PGSharedMemoryReAttach is
different for Windows and EXEC_BACKEND builds). I think it makes sense
to have the retry mechanism for EXEC_BACKEND builds, so the patch does
it that way. Yet another point which needs some thought: for the
reattach operation, before retrying, do we want to reserve the shm by
using VirtualAllocEx?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachments:
win_shm_retry_reattach_v1.patch (+95 -16)
On Tue, May 23, 2017 at 8:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
So it seems both you and Tom are leaning towards some sort of retry
mechanism for shm reattach on Windows. I also think that is a viable
option to negate the impact of ASLR. Attached patch does that. Note
that, as I have mentioned above I think we need to do it for shm
reserve operation as well. I think we need to decide how many retries
are sufficient before bailing out. As of now, I have used 10 to have
some similarity with PGSharedMemoryCreate(), but we can choose some
different count as well. One might say that we can have "number of
retries" as a guc parameter, but I am not sure about it, so not used.
New GUCs can be backpatched if necessary, though this does not seem
necessary here. Who is going to set that up anyway if we have a high
enough limit? 10 looks like a sufficient number to me.
Another point to consider is whether we want the same retry mechanism
for EXEC_BACKEND builds (the function PGSharedMemoryReAttach is
different for Windows and EXEC_BACKEND builds). I think it makes sense
to have the retry mechanism for EXEC_BACKEND builds, so the patch does
it that way. Yet another point which needs some thought: for the
reattach operation, before retrying, do we want to reserve the shm by
using VirtualAllocEx?
- elog(FATAL, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
+ {
+ elog(LOG, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
      UsedShmemSegID, UsedShmemSegAddr, GetLastError());
+ return false;
+ }
This should be a WARNING, with the attempt number reported as well?
-void
-PGSharedMemoryReAttach(void)
+bool
+PGSharedMemoryReAttach(int retry_count)
I think that the loop logic should be kept within
PGSharedMemoryReAttach, this makes the code of postmaster.c cleaner,
and it seems to me that each step of PGSharedMemoryReAttach() should
be retried in order. Do we also need to worry about SysV? I agree with
you that having consistency is better, but I don't recall seeing
failures or complaints related to Cygwin for ASLR.
I think that you are forgetting PGSharedMemoryCreate in the retry process.
--
Michael
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Wed, May 24, 2017 at 6:59 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
On Tue, May 23, 2017 at 8:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
So it seems both you and Tom are leaning towards some sort of retry
mechanism for shm reattach on Windows. I also think that is a viable
option to negate the impact of ASLR. Attached patch does that. Note
that, as I have mentioned above I think we need to do it for shm
reserve operation as well. I think we need to decide how many retries
are sufficient before bailing out. As of now, I have used 10 to have
some similarity with PGSharedMemoryCreate(), but we can choose some
different count as well. One might say that we can have "number of
retries" as a guc parameter, but I am not sure about it, so not used.
New GUCs can be backpatched if necessary, though this does not seem
necessary here. Who is going to set that up anyway if we have a high
enough limit? 10 looks like a sufficient number to me.
Okay.
Another point to consider is whether we want the same retry mechanism
for EXEC_BACKEND builds (the function PGSharedMemoryReAttach is
different for Windows and EXEC_BACKEND builds). I think it makes sense
to have the retry mechanism for EXEC_BACKEND builds, so the patch does
it that way. Yet another point which needs some thought: for the
reattach operation, before retrying, do we want to reserve the shm by
using VirtualAllocEx?
- elog(FATAL, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
+ {
+ elog(LOG, "could not reattach to shared memory (key=%p, addr=%p): error code %lu",
      UsedShmemSegID, UsedShmemSegAddr, GetLastError());
+ return false;
+ }
This should be a WARNING, with the attempt number reported as well?
I think for a retry we just want to log it; why do you want to display
it as a warning? During startup, other similar places (where we
continue startup even after the call has failed) also use LOG (refer
to PGSharedMemoryDetach), so why do differently here? However, I think
adding retry_count should be okay.
-void
-PGSharedMemoryReAttach(void)
+bool
+PGSharedMemoryReAttach(int retry_count)
I think that the loop logic should be kept within
PGSharedMemoryReAttach, this makes the code of postmaster.c cleaner,
Sure, we can do that, but then we need to repeat the same looping
logic in both the sysv and win32 cases. Now, if we decide not to do it
for the sysv case, then it might make sense to consider doing it in
the function PGSharedMemoryReAttach().
and it seems to me that each step of PGSharedMemoryReAttach() should
be retried in order. Do we also need to worry about SysV? I agree with
you that having consistency is better, but I don't recall seeing
failures or complaints related to Cygwin for ASLR.
I am also not aware of Cygwin failures, but keeping the code the same
for the cases where we are not using the fork mechanism seems like an
advisable approach. Also, if someone is testing EXEC_BACKEND on Linux,
then randomization is on by default, so one can hit this issue during
tests. That doesn't matter much, but it still seems better to have the
retry logic to prevent the issue.
I think that you are forgetting PGSharedMemoryCreate in the retry process.
No, we don't need a retry for PGSharedMemoryCreate, as we need this
only when we are trying to attach to some pre-reserved shared memory.
Do you have something else in mind?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Wed, May 24, 2017 at 09:29:11AM -0400, Michael Paquier wrote:
On Tue, May 23, 2017 at 8:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
So it seems both you and Tom are leaning towards some sort of retry
mechanism for shm reattach on Windows. I also think that is a viable
option to negate the impact of ASLR. Attached patch does that. Note
that, as I have mentioned above I think we need to do it for shm
reserve operation as well. I think we need to decide how many retries
are sufficient before bailing out. As of now, I have used 10 to have
some similarity with PGSharedMemoryCreate(), but we can choose some
different count as well. One might say that we can have "number of
retries" as a guc parameter, but I am not sure about it, so not used.
New GUCs can be backpatched if necessary, though this does not seem
necessary here. Who is going to set that up anyway if we have a high
enough limit? 10 looks like a sufficient number to me.
Ten feels low to me. The value should be low enough so users don't give up
and assume a permanent hang, but there's little advantage to making it lower.
I'd set it such that we give up in 1-5s on a modern Windows machine, which I
expect implies a retry count of one hundred or more.
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Noah Misch
Ten feels low to me. The value should be low enough so users don't give
up and assume a permanent hang, but there's little advantage to making it
lower.
I'd set it such that we give up in 1-5s on a modern Windows machine, which
I expect implies a retry count of one hundred or more.
Then, maybe we can measure the time in each iteration and give up after a
certain number of seconds.
Regards
Takayuki Tsunakawa
On Thu, May 25, 2017 at 11:34 AM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Noah Misch
Ten feels low to me. The value should be low enough so users don't give
up and assume a permanent hang, but there's little advantage to making it
lower.
I'd set it such that we give up in 1-5s on a modern Windows machine, which
I expect implies a retry count of one hundred or more.
Then, maybe we can measure the time in each iteration and give up after a
certain number of seconds.
Indeed, pgrename() does so with a 100ms sleep time between each
iteration. Perhaps we could do that and limit to 50 iterations?
--
Michael
On Thu, May 25, 2017 at 11:41:19AM +0900, Michael Paquier wrote:
On Thu, May 25, 2017 at 11:34 AM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Noah Misch
Ten feels low to me. The value should be low enough so users don't give
up and assume a permanent hang, but there's little advantage to making it
lower.
I'd set it such that we give up in 1-5s on a modern Windows machine, which
I expect implies a retry count of one hundred or more.
Then, maybe we can measure the time in each iteration and give up after a
certain number of seconds.
Exact duration is not important. Giving up after 0.1s is needlessly early,
because a system taking that long to start a backend is still usable. Giving
up after 50s is quite late. In between those extremes, lots of durations
would be reasonable. Thus, measuring time is needless complexity; retry count
is a suitable proxy.
Indeed, pgrename() does so with a 100ms sleep time between each
iteration. Perhaps we could do that and limit to 50 iterations?
pgrename() is polling for an asynchronous event, hence the sleep. To my
knowledge, time doesn't heal shm attach failures; therefore, a sleep is not
appropriate here.
On Thu, May 25, 2017 at 8:41 AM, Noah Misch <noah@leadboat.com> wrote:
On Thu, May 25, 2017 at 11:41:19AM +0900, Michael Paquier wrote:
Indeed, pgrename() does so with a 100ms sleep time between each
iteration. Perhaps we could do that and limit to 50 iterations?
pgrename() is polling for an asynchronous event, hence the sleep. To my
knowledge, time doesn't heal shm attach failures; therefore, a sleep is not
appropriate here.
Yes, I also share this opinion, the shm attach failures are due to
randomization behavior, so sleep won't help much. So, I will change
the patch to use 100 retries unless people have other opinions.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Amit Kapila <amit.kapila16@gmail.com> writes:
Yes, I also share this opinion, the shm attach failures are due to
randomization behavior, so sleep won't help much. So, I will change
the patch to use 100 retries unless people have other opinions.
Sounds about right to me.
regards, tom lane
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Amit Kapila
Yes, I also share this opinion, the shm attach failures are due to
randomization behavior, so sleep won't help much. So, I will change the
patch to use 100 retries unless people have other opinions.
Perhaps I'm misunderstanding, but I thought it is not yet known whether
the cause of the original problem is ASLR. I remember someone referred
to anti-virus software and something else. I guessed that the reason
Noah suggested 1 - 5 seconds of retry is based on the expectation that
the address space might be freed by the anti-virus software.
Regards
Takayuki Tsunakawa
On Thu, May 25, 2017 at 8:01 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Amit Kapila <amit.kapila16@gmail.com> writes:
Yes, I also share this opinion, the shm attach failures are due to
randomization behavior, so sleep won't help much. So, I will change
the patch to use 100 retries unless people have other opinions.
Sounds about right to me.
Okay. I have changed the retry count to 100 and modified a few comments
in the attached patch.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Attachments:
win_shm_retry_reattach_v2.patch (+97 -17)
On Fri, May 26, 2017 at 5:30 AM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Amit Kapila
Yes, I also share this opinion, the shm attach failures are due to
randomization behavior, so sleep won't help much. So, I will change the
patch to use 100 retries unless people have other opinions.
Perhaps I'm misunderstanding, but I thought it is not yet known whether
the cause of the original problem is ASLR. I remember someone referred
to anti-virus software and something else.
We are here purposefully trying to resolve the randomized shm
allocation behavior due to ASLR. The original failure was on a Linux
machine and is resolved. We presumably sometimes get the failures [1]
due to this behavior.
I guessed that the reason Noah suggested 1 - 5 seconds of retry is based on the expectation that the address space might be freed by the anti-virus software.
Noah is also suggesting to have a retry count, read his mail above in
this thread and refer to his comment ("Thus, measuring time is
needless complexity; retry count is a suitable proxy.")
I think the real question here is whether we should backpatch this
fix, do it just in HEAD, or consider it as a new feature for
PostgreSQL 11. I think it should be fixed in HEAD, and the change
seems harmless to me, so we should even backpatch it.
[1]: /messages/by-id/14121.1485360296@sss.pgh.pa.us
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Fri, May 26, 2017 at 8:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
I think the real question here is whether we should backpatch this
fix, do it just in HEAD, or consider it as a new feature for
PostgreSQL 11. I think it should be fixed in HEAD, and the change
seems harmless to me, so we should even backpatch it.
The thing is not invasive, so backpatching is a low-risk move. We can
as well get that into HEAD first, wait a bit for dust to settle on it,
and then backpatch.
--
Michael
On Fri, May 26, 2017 at 8:24 AM, Michael Paquier <michael.paquier@gmail.com>
wrote:
On Fri, May 26, 2017 at 8:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
I think the real question here is whether we should backpatch this
fix, do it just in HEAD, or consider it as a new feature for
PostgreSQL 11. I think it should be fixed in HEAD, and the change
seems harmless to me, so we should even backpatch it.
The thing is not invasive, so backpatching is a low-risk move. We can
as well get that into HEAD first, wait a bit for dust to settle on it,
and then backpatch.
I would definitely suggest putting it in HEAD (and thus, v10) for a while
to get some real world exposure before backpatching. But if it does work
out well in the end, then we can certainly consider backpatching it. But
given the difficulty in reliably reproducing the problem etc, I think it's
a good idea to give it some proper real world experience in 10 first.
--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
On Fri, May 26, 2017 at 8:21 PM, Magnus Hagander <magnus@hagander.net> wrote:
On Fri, May 26, 2017 at 8:24 AM, Michael Paquier <michael.paquier@gmail.com> wrote:
On Fri, May 26, 2017 at 8:20 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
I think the real question here is whether we should backpatch this
fix, do it just in HEAD, or consider it as a new feature for
PostgreSQL 11. I think it should be fixed in HEAD, and the change
seems harmless to me, so we should even backpatch it.
The thing is not invasive, so backpatching is a low-risk move. We can
as well get that into HEAD first, wait a bit for dust to settle on it,
and then backpatch.
I would definitely suggest putting it in HEAD (and thus, v10) for a while
to get some real world exposure before backpatching.
Makes sense to me, so I have added an entry in the "Older Bugs" section
of the PostgreSQL 10 Open Items.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Fri, May 26, 2017 at 10:51 AM, Magnus Hagander <magnus@hagander.net> wrote:
I would definitely suggest putting it in HEAD (and thus, v10) for a while to
get some real world exposure before backpatching. But if it does work out
well in the end, then we can certainly consider backpatching it. But given
the difficulty in reliably reproducing the problem etc, I think it's a good
idea to give it some proper real world experience in 10 first.
So, are you going to, perhaps, commit this? Or who is picking this up?
/me knows precious little about Windows.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas <robertmhaas@gmail.com> writes:
So, are you going to, perhaps, commit this? Or who is picking this up?
/me knows precious little about Windows.
I'm not going to be the one to commit this either, but seems like someone
should.
regards, tom lane
On 01/06/17 15:25, Tom Lane wrote:
Robert Haas <robertmhaas@gmail.com> writes:
So, are you going to, perhaps, commit this? Or who is picking this up?
/me knows precious little about Windows.
I'm not going to be the one to commit this either, but seems like someone
should.
The new code does not use any Windows-specific APIs or anything; it just
adds retry logic for reattaching when we do EXEC_BACKEND, which seems to
be the agreed way of solving this. I do have a couple of comments about
the code though.
The new parameter retry_count in PGSharedMemoryReAttach() seems to be
used only to decide whether to log reattach issues, so that we don't
spam the log when retrying, but this fact is not mentioned anywhere.
Also, I am not excited about the following coding style:
+ if (!pgwin32_ReserveSharedMemoryRegion(pi.hProcess))
+     continue;
+ else
+ {
Amit, if you want to avoid having to add the curly braces for a single
line while still having an else, I'd invert the expression in the if ()
statement so that the true case comes first. It's much less ugly to have
the curly-brace part first and the continue statement in the else block, IMHO.
--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, May 26, 2017 at 05:50:45PM +0530, Amit Kapila wrote:
On Fri, May 26, 2017 at 5:30 AM, Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com> wrote:
I guessed that the reason Noah suggested 1 - 5 seconds of retry is based on the expectation that the address space might be freed by the anti-virus software.
No, I suggested it because I wouldn't seriously consider keeping an
installation where backend start takes 5s. If the address conflicts are that
persistent, I'd fix the bug or switch operating systems. Therefore, we may as
well let it fail at that duration, thereby showing the user what to
investigate. Startup time of 0.2s, on the other hand, is noticeable but
usable; I'd prefer not to fail hard at that duration.
Noah is also suggesting to have a retry count, read his mail above in
this thread and refer to his comment ("Thus, measuring time is
needless complexity; retry count is a suitable proxy.")
Right.
On Thu, Jun 1, 2017 at 10:36 PM, Petr Jelinek
<petr.jelinek@2ndquadrant.com> wrote:
On 01/06/17 15:25, Tom Lane wrote:
Robert Haas <robertmhaas@gmail.com> writes:
So, are you going to, perhaps, commit this? Or who is picking this up?
/me knows precious little about Windows.
I'm not going to be the one to commit this either, but seems like someone
should.
The new code does not use any Windows-specific APIs or anything; it just
adds retry logic for reattaching when we do EXEC_BACKEND, which seems to
be the agreed way of solving this. I do have a couple of comments about
the code though.
The new parameter retry_count in PGSharedMemoryReAttach() seems to be
used only to decide whether to log reattach issues, so that we don't
spam the log when retrying, but this fact is not mentioned anywhere.
No, it is to avoid freeing memory that was not reserved on a retry.
See the comment:
+ * On the first try, release memory region reservation that was made by
+ * the postmaster.
Are you referring to the same function in sysv_shm.c? If so, I can
probably say "refer to the same API in win32_shmem.c", or maybe add a
similar comment there as well.
Also, I am not excited about the following coding style:
+ if (!pgwin32_ReserveSharedMemoryRegion(pi.hProcess))
+     continue;
+ else
+ {
Amit, if you want to avoid having to add the curly braces for a single
line while still having an else, I'd invert the expression in the if ()
statement so that the true case comes first. It's much less ugly to have
the curly-brace part first and the continue statement in the else block, IMHO.
I felt that it is easier to understand the code in the way it is
currently written, but I can invert the check if you find it is easier
to read and understand that way.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com