Comments on Custom RMGRs
New rmgr stuff looks interesting. I've had a detailed look through it
and tried to think about how it might be used in practice.
Spotted a minor comment that needs adjustment for new methods...
[PATCH: rmgr_001.v1.patch]
I notice rm_startup() and rm_cleanup() presume that they only run in
recovery. If recovery is "not needed", there is no way to run anything
at all, which seems wrong - how would we know that in advance? I would prefer
it if rm_startup() and rm_cleanup() were executed in all cases. Only 4
builtin index rmgrs have these anyway, and they are all quick, so I
suggest we run them always. This allows a greater range of startup
behavior for rmgrs.
[PATCH: rmgr_002.v1.patch]
It occurs to me that any use of WAL presumes that Checkpoints exist
and do something useful. However, the custom rmgr interface doesn't
allow you to specify any actions on checkpoint, so ends up being
limited in scope. So I think we also need an rm_checkpoint() call -
which would be a no-op for existing rmgrs.
[PATCH: rmgr_003.v1.patch]
The above turns out to be fairly simple, but extends the API to
something truly flexible.
Please let me know what you think?
--
Simon Riggs http://www.EnterpriseDB.com/
On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:
[PATCH: rmgr_001.v1.patch]
[PATCH: rmgr_002.v1.patch]
Thank you. Both of these look like good ideas, and I will commit them
in a few days assuming that nobody else sees a problem.
It occurs to me that any use of WAL presumes that Checkpoints exist
and do something useful. However, the custom rmgr interface doesn't
allow you to specify any actions on checkpoint, so ends up being
limited in scope. So I think we also need an rm_checkpoint() call -
which would be a no-op for existing rmgrs.
[PATCH: rmgr_003.v1.patch]
I also like this idea, but can you describe the intended use case? I
looked through CheckPointGuts() and I'm not sure what else a custom AM
might want to do. Maybe sync special files in a way that's not handled
with RegisterSyncRequest()?
Regards,
Jeff Davis
Hi,
On 2022-05-11 09:39:48 -0700, Jeff Davis wrote:
On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:
[PATCH: rmgr_001.v1.patch]
[PATCH: rmgr_002.v1.patch]
Thank you. Both of these look like good ideas, and I will commit them
in a few days assuming that nobody else sees a problem.
What exactly is the use case here? Without passing in information about
whether recovery will be performed etc, it's not at all clear how callbacks
could do something useful.
I don't think we should allocate a bunch of memory contexts just to free them
immediately after.
It occurs to me that any use of WAL presumes that Checkpoints exist
and do something useful. However, the custom rmgr interface doesn't
allow you to specify any actions on checkpoint, so ends up being
limited in scope. So I think we also need an rm_checkpoint() call -
which would be a no-op for existing rmgrs.
[PATCH: rmgr_003.v1.patch]
I also like this idea, but can you describe the intended use case? I
looked through CheckPointGuts() and I'm not sure what else a custom AM
might want to do. Maybe sync special files in a way that's not handled
with RegisterSyncRequest()?
I'm not happy with the idea of random code being executed in the middle of
CheckPointGuts(), without any documentation of what is legal to do at that
point. To actually be useful we'd likely need multiple calls to such an rmgr
callback, with a parameter where in CheckPointGuts() we are. Right now the
sequencing is explicit in CheckPointGuts(), but with the proposed callback,
that'd not be the case anymore.
Greetings,
Andres Freund
On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:
On 2022-05-11 09:39:48 -0700, Jeff Davis wrote:
On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:
[PATCH: rmgr_001.v1.patch]
[PATCH: rmgr_002.v1.patch]
Thank you. Both of these look like good ideas, and I will commit them
in a few days assuming that nobody else sees a problem.
What exactly is the use case here? Without passing in information about
whether recovery will be performed etc, it's not at all clear how callbacks
could do something useful.
Sure, happy to do it that way.
[PATCH: rmgr_002.v2.patch]
I don't think we should allocate a bunch of memory contexts just to free them
immediately after.
Didn't seem like a problem, but I've added code to use the flag requested above.
It occurs to me that any use of WAL presumes that Checkpoints exist
and do something useful. However, the custom rmgr interface doesn't
allow you to specify any actions on checkpoint, so ends up being
limited in scope. So I think we also need an rm_checkpoint() call -
which would be a no-op for existing rmgrs.
[PATCH: rmgr_003.v1.patch]
I also like this idea, but can you describe the intended use case? I
looked through CheckPointGuts() and I'm not sure what else a custom AM
might want to do. Maybe sync special files in a way that's not handled
with RegisterSyncRequest()?
I'm not happy with the idea of random code being executed in the middle of
CheckPointGuts(), without any documentation of what is legal to do at that
point.
The "I'm not happy.." ship has already sailed with pluggable rmgrs.
Checkpoints exist for one purpose - as the starting place for recovery.
Why would we allow pluggable recovery without *also* allowing
pluggable checkpoints?
To actually be useful we'd likely need multiple calls to such an rmgr
callback, with a parameter where in CheckPointGuts() we are. Right now the
sequencing is explicit in CheckPointGuts(), but with the proposed callback,
that'd not be the case anymore.
It is useful without the extra complexity you mention.
I see multiple uses for the rm_checkpoint() point proposed and I've
been asked multiple times for a checkpoint hook. Any rmgr that
services crash recovery for a non-smgr based storage system would need
this because the current checkpoint code only handles flushing to disk
for smgr-based approaches. That is orthogonal to other code during
checkpoint, so it stands alone quite well.
--
Simon Riggs http://www.EnterpriseDB.com/
Attachments:
rmgr_002.v2.patch (application/octet-stream, +48/-32)
Hi,
On 2022-05-12 22:26:51 +0100, Simon Riggs wrote:
On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:
I'm not happy with the idea of random code being executed in the middle of
CheckPointGuts(), without any documentation of what is legal to do at that
point.
The "I'm not happy.." ship has already sailed with pluggable rmgrs.
I don't agree. The ordering within a checkpoint is a lot more fragile than
what you do in an individual redo routine.
Checkpoints exist for one purpose - as the starting place for recovery.
Why would we allow pluggable recovery without *also* allowing
pluggable checkpoints?
Because one can do a lot of stuff with just pluggable WAL records, without
integrating into checkpoints?
Note that I'm *not* against making checkpoint extensible - I just think it
needs a good bit of design work around when the hook is called etc.
I definitely think it's too late in the cycle to add checkpoint extensibility
now.
To actually be useful we'd likely need multiple calls to such an rmgr
callback, with a parameter where in CheckPointGuts() we are. Right now the
sequencing is explicit in CheckPointGuts(), but with the proposed callback,
that'd not be the case anymore.
It is useful without the extra complexity you mention.
Shrug. The documentation work definitely is needed. Perhaps we can get away
without multiple callbacks within a checkpoint, I think it'll become more
apparent when writing information about the precise point in time the
checkpoint callback is called.
I see multiple uses for the rm_checkpoint() point proposed and I've
been asked multiple times for a checkpoint hook. Any rmgr that
services crash recovery for a non-smgr based storage system would need
this because the current checkpoint code only handles flushing to disk
for smgr-based approaches. That is orthogonal to other code during
checkpoint, so it stands alone quite well.
FWIW, for that there are much bigger problems than checkpoint
extensibility. Most importantly there's currently no good way to integrate
relation creation / drop with the commit / abort infrastructure...
Greetings,
Andres Freund
On Thu, 2022-05-12 at 22:26 +0100, Simon Riggs wrote:
I see multiple uses for the rm_checkpoint() point proposed and I've
been asked multiple times for a checkpoint hook.
Can you elaborate and/or link to a discussion?
Regards,
Jeff Davis
On Fri, 13 May 2022 at 05:13, Jeff Davis <pgsql@j-davis.com> wrote:
On Thu, 2022-05-12 at 22:26 +0100, Simon Riggs wrote:
I see multiple uses for the rm_checkpoint() point proposed and I've
been asked multiple times for a checkpoint hook.
Can you elaborate and/or link to a discussion?
Those were conversations away from Hackers, but I'm happy to share.
The first was a discussion about a data structure needed by BDR about
4 years ago. In the absence of a pluggable checkpoint, the solution
was forced to use a normal table, which wasn't very satisfactory.
The second was a more recent conversation with Mike Stonebraker, at
the end of 2021. He was very keen to remove the buffer manager
entirely, which requires that we have a new smgr, which then needs new
code to allow it to be written to disk at checkpoint time, which then
requires some kind of pluggable code at checkpoint time. (Mike was
also keen to remove WAL, but that's another story entirely!).
The last use case was unlogged indexes, which need to be read from
disk at startup or rebuilt after crash, which requires RmgrStartup to
work both with and without InRedo=true.
--
Simon Riggs http://www.EnterpriseDB.com/
On Fri, 13 May 2022 at 00:42, Andres Freund <andres@anarazel.de> wrote:
On 2022-05-12 22:26:51 +0100, Simon Riggs wrote:
On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:
I'm not happy with the idea of random code being executed in the middle of
CheckPointGuts(), without any documentation of what is legal to do at that
point.
The "I'm not happy.." ship has already sailed with pluggable rmgrs.
I don't agree. The ordering within a checkpoint is a lot more fragile than
what you do in an individual redo routine.
Example?
Checkpoints exist for one purpose - as the starting place for recovery.
Why would we allow pluggable recovery without *also* allowing
pluggable checkpoints?
Because one can do a lot of stuff with just pluggable WAL records, without
integrating into checkpoints?
Note that I'm *not* against making checkpoint extensible - I just think it
needs a good bit of design work around when the hook is called etc.
When was any such work done previously for any other hook?? That isn't needed.
Checkpoints aren't complete until all data structures have
checkpointed, so there are no problems from a partial checkpoint being
written.
As a result, the order of actions in CheckPointGuts() is mostly
independent of each other. The SLRUs are all independent of each
other, as is CheckPointBuffers().
The use cases I'm trying to support aren't tricksy modifications of
existing code, they are just entirely new data structures which are
completely independent of other Postgres objects.
I definitely think it's too late in the cycle to add checkpoint extensibility
now.
To actually be useful we'd likely need multiple calls to such an rmgr
callback, with a parameter where in CheckPointGuts() we are. Right now the
sequencing is explicit in CheckPointGuts(), but with the proposed callback,
that'd not be the case anymore.
It is useful without the extra complexity you mention.
Shrug. The documentation work definitely is needed. Perhaps we can get away
without multiple callbacks within a checkpoint, I think it'll become more
apparent when writing information about the precise point in time the
checkpoint callback is called.
You seem to be thinking in terms of modifying the existing actions in
CheckPointGuts(). I don't care about that. Anybody that wishes to do
that can work out the details of their actions.
There is nothing to document, other than "don't do things that won't
work". How can anyone enumerate all the things that wouldn't work??
There is no list of caveats for any other hook. Why is it needed here?
I see multiple uses for the rm_checkpoint() point proposed and I've
been asked multiple times for a checkpoint hook. Any rmgr that
services crash recovery for a non-smgr based storage system would need
this because the current checkpoint code only handles flushing to disk
for smgr-based approaches. That is orthogonal to other code during
checkpoint, so it stands alone quite well.
FWIW, for that there are much bigger problems than checkpoint
extensibility. Most importantly there's currently no good way to integrate
relation creation / drop with the commit / abort infrastructure...
One at a time...
--
Simon Riggs http://www.EnterpriseDB.com/
On Fri, 2022-05-13 at 13:31 +0100, Simon Riggs wrote:
The first was a discussion about a data structure needed by BDR about
4 years ago. In the absence of a pluggable checkpoint, the solution
was forced to use a normal table, which wasn't very satisfactory.
I'm interested to hear more about this case. Are you developing it into
a full table AM? In my experience with columnar, there's still a long
tail of things I wish I had in the backend to better support complete
table AMs.
The second was a more recent conversation with Mike Stonebraker, at
the end of 2021. He was very keen to remove the buffer manager
entirely, which requires that we have a new smgr, which then needs new
code to allow it to be written to disk at checkpoint time, which then
requires some kind of pluggable code at checkpoint time. (Mike was
also keen to remove WAL, but that's another story entirely!).
I'm guessing that would be more of an experimental/ambitious project,
and based on modified postgres anyway.
The last use case was unlogged indexes, which need to be read from
disk at startup or rebuilt after crash, which requires RmgrStartup to
work both with and without InRedo=true.
That sounds like a core feature, in which case we can just refactor
that for v16. It might be a nice cleanup for unlogged tables, too. I
don't think your 002-v2 patch is particularly risky, but any reluctance
at all probably pushes it to v16 given that it's so late in the cycle.
Regards,
Jeff Davis
On Fri, May 13, 2022 at 8:47 AM Simon Riggs
<simon.riggs@enterprisedb.com> wrote:
Note that I'm *not* against making checkpoint extensible - I just think it
needs a good bit of design work around when the hook is called etc.
When was any such work done previously for any other hook?? That isn't needed.
I think almost every proposal to add a hook results in some discussion
about how usable the hook will be and whether it's being put in the
correct place and called with the correct arguments.
I think that's a good thing, too. Otherwise the code would be
cluttered with a bunch of hooks that seemed to someone like a good
idea at the time but are actually just a maintenance headache.
--
Robert Haas
EDB: http://www.enterprisedb.com
Hi,
The checkpoint hook looks very useful, especially for extensions that have
their own storage, like pg_stat_statements.
For example, we can keep work data in shared memory and save it only during
checkpoints.
When recovering, we need to read all the data from the disk and then repeat
the latest changes from the WAL.
On Mon, Feb 26, 2024 at 2:42 PM Simon Riggs <simon.riggs@enterprisedb.com>
wrote:
[...]
There is nothing to document, other than "don't do things that won't
work". How can anyone enumerate all the things that wouldn't work??
There is no list of caveats for any other hook. Why is it needed here?
There are easily reproducible issues where rm_checkpoint() throws an ERROR.
When it occurs at the end-of-recovery checkpoint, the server fails to start
with a message like this:
ERROR: Test error
FATAL: checkpoint request failed
HINT: Consult recent messages in the server log for details.
Even if we remove the broken extension from shared_preload_libraries, we
get the following message in the server log:
FATAL: resource manager with ID 128 not registered
HINT: Include the extension module that implements this resource manager
in shared_preload_libraries.
In both cases, with or without the extension in shared_preload_libraries,
the server cannot start.
This seems like a programmer's problem, but what should the user do after
receiving such messages?
Maybe it would be safer to use something like after_checkpoint_hook, which
would be called after the checkpoint is completed?
This would be enough for cases where we only need to save shared memory to
disk.
--
Regards,
Daniil Anisimov
Postgres Professional: http://postgrespro.com
On Mon, 2024-02-26 at 23:29 +0700, Danil Anisimow wrote:
Hi,
The checkpoint hook looks very useful, especially for extensions that
have their own storage, like pg_stat_statements.
For example, we can keep work data in shared memory and save it only
during checkpoints.
When recovering, we need to read all the data from the disk and then
repeat the latest changes from the WAL.
Let's pick this discussion back up, then. Where should the hook go?
Does it need to be broken into phases like resource owners? What
guidance can we provide to extension authors to use it correctly?
Simon's right that these things don't need to be 100% answered for
every hook we add; but I agree with Andres and Robert that this could
benefit from some more discussion about the details.
The proposal calls the hook right after CheckPointPredicate() and
before CheckPointBuffers(). Is that the right place for the use case
you have in mind with pg_stat_statements?
Regards,
Jeff Davis
On Tue, Feb 27, 2024 at 2:56 AM Jeff Davis <pgsql@j-davis.com> wrote:
Let's pick this discussion back up, then. Where should the hook go?
Does it need to be broken into phases like resource owners? What
guidance can we provide to extension authors to use it correctly?
Simon's right that these things don't need to be 100% answered for
every hook we add; but I agree with Andres and Robert that this could
benefit from some more discussion about the details.
The proposal calls the hook right after CheckPointPredicate() and
before CheckPointBuffers(). Is that the right place for the use case
you have in mind with pg_stat_statements?
Hello!
Answering your questions might take some time as I want to write a sample
patch for pg_stat_statements and make some tests.
What do you think about putting the patch to the commitfest as it is closing in a
few hours?
--
Regards,
Daniil Anisimov
Postgres Professional: http://postgrespro.com
On Thu, 2024-02-29 at 21:47 +0700, Danil Anisimow wrote:
Answering your questions might take some time as I want to write a
sample patch for pg_stat_statements and make some tests.
What do you think about putting the patch to the commitfest as it is closing
in a few hours?
Added to March CF.
I don't have an immediate use case in mind for this, so please drive
that part of the discussion. I can't promise this for 17, but if the
patch is simple enough and a quick consensus develops, then it's
possible.
Regards,
Jeff Davis
On 29 Feb 2024, at 19:47, Danil Anisimow <anisimow.d@gmail.com> wrote:
Answering your questions might take some time as I want to write a sample patch for pg_stat_statements and make some tests.
What do you think about putting the patch to the commitfest as it is closing in a few hours?
I've switched the patch to "Waiting on Author" to indicate that the patch is
not yet available. Please flip it back when it's available for review.
Thanks!
Best regards, Andrey Borodin.
On Fri, Mar 1, 2024 at 2:06 AM Jeff Davis <pgsql@j-davis.com> wrote:
Added to March CF.
I don't have an immediate use case in mind for this, so please drive
that part of the discussion. I can't promise this for 17, but if the
patch is simple enough and a quick consensus develops, then it's
possible.
[pgss_001.v1.patch] adds a custom resource manager to the
pg_stat_statements extension. The proposed patch is not a complete solution
for pgss and may not work correctly with replication.
The 020_crash.pl test demonstrates server interruption by killing a
backend. Without the rm_checkpoint hook, the server restores pgss stats
only from WAL written after the last CHECKPOINT; data added to the WAL
before the checkpoint is not restored.
The rm_checkpoint hook allows saving shared memory data to disk at each
checkpoint. However, for pg_stat_statements, it matters which kind of
checkpoint occurred: at a shutdown checkpoint, pgss deletes the temporary
file of query texts, but doing so at any other checkpoint would be
unacceptable.
To provide this capability, a flags parameter was added to the
rm_checkpoint hook. The changes are presented in [rmgr_003.v2.patch].
--
Regards,
Daniil Anisimov
Postgres Professional: http://postgrespro.com
On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:
[pgss_001.v1.patch] adds a custom resource manager to the
pg_stat_statements extension.
Did you consider moving the logic for loading the initial contents from
disk from pgss_shmem_startup to .rmgr_startup?
The rm_checkpoint hook allows saving shared memory data to disk at
each checkpoint. However, for pg_stat_statements, it matters when the
checkpoint occurred. When the server shuts down, pgss deletes the
temporary file of query texts. In other cases, this is unacceptable.
To provide this capability, a flags parameter was added to the
rm_checkpoint hook. The changes are presented in [rmgr_003.v2.patch].
Overall this seems fairly reasonable to me. I think this will work for
similar extensions, where the data being stored is independent from the
buffers.
My biggest concern is that it might not be quite right for a table AM
that has complex state that needs action to be taken at a slightly
different time, e.g. right after CheckPointBuffers().
Then again, the rmgr is a low-level API, and any extension using it
should be prepared to adapt to changes. If it works for pgss, then we
know it works for at least one thing, and we can always improve it
later. For instance, we might call the hook several times and pass it a
"phase" argument.
Regards,
Jeff Davis
On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:
The proposed patch is not a complete solution for pgss and may not
work correctly with replication.
Also, what is the desired behavior during replication? Should queries
on the primary be represented in pgss on the replica? If the answer is
yes, should they be differentiated somehow so that you can know where
the slow queries are running?
Regards,
Jeff Davis
On Fri, Mar 22, 2024 at 2:02 AM Jeff Davis <pgsql@j-davis.com> wrote:
On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:
[pgss_001.v1.patch] adds a custom resource manager to the
pg_stat_statements extension.
Did you consider moving the logic for loading the initial contents from
disk from pgss_shmem_startup to .rmgr_startup?
I tried it, but .rmgr_startup is not called if the system was shut down
cleanly.
My biggest concern is that it might not be quite right for a table AM
that has complex state that needs action to be taken at a slightly
different time, e.g. right after CheckPointBuffers().
Then again, the rmgr is a low-level API, and any extension using it
should be prepared to adapt to changes. If it works for pgss, then we
know it works for at least one thing, and we can always improve it
later. For instance, we might call the hook several times and pass it a
"phase" argument.
In [rmgr_003.v3.patch] I added a phase argument to RmgrCheckpoint().
Currently it is only called in two places: before and after
CheckPointBuffers().
--
Regards,
Daniil Anisimov
Postgres Professional: http://postgrespro.com
Attachments:
rmgr_003.v3.patch (text/x-patch, +63/-28)
On Fri, 2024-03-29 at 18:20 +0700, Danil Anisimow wrote:
In [rmgr_003.v3.patch] I added a phase argument to RmgrCheckpoint().
Currently it is only called in two places: before and after
CheckPointBuffers().
I am fine with this.
You've moved the discussion forward in two ways:
1. Changes to pg_stat_statements to actually use the API; and
2. The hook is called at multiple points.
Those at least partially address the concerns raised by Andres and
Robert. But given that there was pushback from multiple people on the
feature, I'd like to hear from at least one of them. It's very late in
the cycle so I'm not sure we'll get more feedback in time, though.
Regards,
Jeff Davis