Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Started by Melanie Plageman · about 3 years ago · 26 messages · pgsql-hackers
#1 Melanie Plageman
melanieplageman@gmail.com

Hi,

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

After 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a
double which, after being passed to WaitLatch() as timeout, which is a
long, ends up being 0, so we don't end up waiting AFAICT.

When I set [autovacuum_]vacuum_cost_delay to 0.5, SHOW will report that
it is 500us, but WaitLatch() is still getting 0 as timeout.

- Melanie

#2 Thomas Munro
thomas.munro@gmail.com
In reply to: Melanie Plageman (#1)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

After 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a
double which, after being passed to WaitLatch() as timeout, which is a
long, ends up being 0, so we don't end up waiting AFAICT.

When I set [autovacuum_]vacuum_cost_delay to 0.5, SHOW will report that
it is 500us, but WaitLatch() is still getting 0 as timeout.

Given that some of the clunkier underlying kernel primitives have
milliseconds in their interface, I don't think it would be possible to
make a usec-based variant of WaitEventSetWait() that works everywhere.
Could it possibly make sense to do something that accumulates the
error, so if you're using 0.5 then every second vacuum_delay_point()
waits for 1ms?

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#2)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

Given that some of the clunkier underlying kernel primitives have
milliseconds in their interface, I don't think it would be possible to
make a usec-based variant of WaitEventSetWait() that works everywhere.
Could it possibly make sense to do something that accumulates the
error, so if you're using 0.5 then every second vacuum_delay_point()
waits for 1ms?

Yeah ... using float math there was cute, but it'd only get us so far.
The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Can we fix this by making VacuumCostBalance carry the extra fractional
delay, or would a separate variable be better?

regards, tom lane

#4 Stephen Frost
sfrost@snowman.net
In reply to: Thomas Munro (#2)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Greetings,

* Thomas Munro (thomas.munro@gmail.com) wrote:

On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

After 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a
double which, after being passed to WaitLatch() as timeout, which is a
long, ends up being 0, so we don't end up waiting AFAICT.

When I set [autovacuum_]vacuum_cost_delay to 0.5, SHOW will report that
it is 500us, but WaitLatch() is still getting 0 as timeout.

Given that some of the clunkier underlying kernel primitives have
milliseconds in their interface, I don't think it would be possible to
make a usec-based variant of WaitEventSetWait() that works everywhere.
Could it possibly make sense to do something that accumulates the
error, so if you're using 0.5 then every second vacuum_delay_point()
waits for 1ms?

Hmm. That generally makes sense to me.. though isn't exactly the same.
Still, I wouldn't want to go back to purely pg_usleep() as that has the
other downsides mentioned.

Perhaps if the delay is sub-millisecond, explicitly do the WaitLatch()
with zero but also do the pg_usleep()? That's doing a fair bit of work
beyond just sleeping, but it also means we shouldn't miss out on the
postmaster going away or similar..

Thanks,

Stephen

#5 Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#3)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

Given that some of the clunkier underlying kernel primitives have
milliseconds in their interface, I don't think it would be possible to
make a usec-based variant of WaitEventSetWait() that works everywhere.
Could it possibly make sense to do something that accumulates the
error, so if you're using 0.5 then every second vacuum_delay_point()
waits for 1ms?

Yeah ... using float math there was cute, but it'd only get us so far.
The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Also, the previous coding was already b0rked, because pg_usleep()
rounds up to milliseconds on Windows (with a surprising formula for
rounding), and also the whole concept seems to assume things about
schedulers that aren't really universally true. If we actually cared
about high res times maybe we should be using nanosleep and tracking
the drift? And spreading it out a bit. But I don't know.

Can we fix this by making VacuumCostBalance carry the extra fractional
delay, or would a separate variable be better?

I was wondering the same thing, but not being too familiar with that
code, no opinion on that yet.

#6 Melanie Plageman
melanieplageman@gmail.com
In reply to: Thomas Munro (#5)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Thu, Mar 9, 2023 at 5:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:

On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman
<melanieplageman@gmail.com> wrote:

I think that 4753ef37e0ed undid the work caf626b2c did to support
sub-millisecond delays for vacuum and autovacuum.

Given that some of the clunkier underlying kernel primitives have
milliseconds in their interface, I don't think it would be possible to
make a usec-based variant of WaitEventSetWait() that works everywhere.
Could it possibly make sense to do something that accumulates the
error, so if you're using 0.5 then every second vacuum_delay_point()
waits for 1ms?

Yeah ... using float math there was cute, but it'd only get us so far.
The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Also, the previous coding was already b0rked, because pg_usleep()
rounds up to milliseconds on Windows (with a surprising formula for
rounding), and also the whole concept seems to assume things about
schedulers that aren't really universally true. If we actually cared
about high res times maybe we should be using nanosleep and tracking
the drift? And spreading it out a bit. But I don't know.

Can we fix this by making VacuumCostBalance carry the extra fractional
delay, or would a separate variable be better?

I was wondering the same thing, but not being too familiar with that
code, no opinion on that yet.

Well, VacuumCostBalance is reset to zero at the top of vacuum() -- which is
called for each table by autovacuum, so it would get reset to zero between
autovacuuming tables. I dunno how you feel about that...

- Melanie

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#5)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Also, the previous coding was already b0rked, because pg_usleep()
rounds up to milliseconds on Windows (with a surprising formula for
rounding), and also the whole concept seems to assume things about
schedulers that aren't really universally true. If we actually cared
about high res times maybe we should be using nanosleep and tracking
the drift? And spreading it out a bit. But I don't know.

Yeah, I was wondering about trying to make it a closed-loop control,
but I think that'd be huge overkill considering what the mechanism is
trying to accomplish.

A minimalistic fix could be as attached. I'm not sure if it's worth
making the state variable global so that it can be reset to zero in
the places where we zero out VacuumCostBalance etc. Also note that
this is ignoring the VacuumSharedCostBalance stuff, so you'd possibly
have the extra delay accumulating in unexpected places when there are
multiple workers. But I really doubt it's worth worrying about that.

Is it reasonable to assume that all modern platforms can time
millisecond delays accurately? Ten years ago I'd have suggested
truncating the delay to a multiple of 10msec and using this logic
to track the remainder, but maybe now that's unnecessary.

regards, tom lane

Attachments:

fix-fractional-vacuum-cost-delay-again-wip.patch (text/x-diff, +21 −5)
#8 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#7)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Thu, Mar 09, 2023 at 05:27:08PM -0500, Tom Lane wrote:

Is it reasonable to assume that all modern platforms can time
millisecond delays accurately? Ten years ago I'd have suggested
truncating the delay to a multiple of 10msec and using this logic
to track the remainder, but maybe now that's unnecessary.

If so, it might also be worth updating or removing this comment in
pgsleep.c:

* NOTE: although the delay is specified in microseconds, the effective
* resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect
* the requested delay to be rounded up to the next resolution boundary.

I've had doubts for some time about whether this is still accurate...

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com

#9 Melanie Plageman
melanieplageman@gmail.com
In reply to: Tom Lane (#7)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Also, the previous coding was already b0rked, because pg_usleep()
rounds up to milliseconds on Windows (with a surprising formula for
rounding), and also the whole concept seems to assume things about
schedulers that aren't really universally true. If we actually cared
about high res times maybe we should be using nanosleep and tracking
the drift? And spreading it out a bit. But I don't know.

Yeah, I was wondering about trying to make it a closed-loop control,
but I think that'd be huge overkill considering what the mechanism is
trying to accomplish.

Not relevant to fixing this, but I wonder if you could eliminate the
need to specify the cost delay in most cases for autovacuum if you used
feedback from how much vacuuming work was done during the last cycle of
vacuuming to control the delay value internally - a kind of
feedback-adjusted controller.

- Melanie

#10 Melanie Plageman
melanieplageman@gmail.com
In reply to: Tom Lane (#7)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

The caf626b2c code would only work well on platforms that have
microsecond-based sleep primitives, so it was already not too portable.

Also, the previous coding was already b0rked, because pg_usleep()
rounds up to milliseconds on Windows (with a surprising formula for
rounding), and also the whole concept seems to assume things about
schedulers that aren't really universally true. If we actually cared
about high res times maybe we should be using nanosleep and tracking
the drift? And spreading it out a bit. But I don't know.

Yeah, I was wondering about trying to make it a closed-loop control,
but I think that'd be huge overkill considering what the mechanism is
trying to accomplish.

A minimalistic fix could be as attached. I'm not sure if it's worth
making the state variable global so that it can be reset to zero in
the places where we zero out VacuumCostBalance etc. Also note that
this is ignoring the VacuumSharedCostBalance stuff, so you'd possibly
have the extra delay accumulating in unexpected places when there are
multiple workers. But I really doubt it's worth worrying about that.

What if someone resets the delay guc and there is still a large residual?

#11 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Melanie Plageman (#10)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Melanie Plageman <melanieplageman@gmail.com> writes:

On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

A minimalistic fix could be as attached. I'm not sure if it's worth
making the state variable global so that it can be reset to zero in
the places where we zero out VacuumCostBalance etc. Also note that
this is ignoring the VacuumSharedCostBalance stuff, so you'd possibly
have the extra delay accumulating in unexpected places when there are
multiple workers. But I really doubt it's worth worrying about that.

What if someone resets the delay guc and there is still a large residual?

By definition, the residual is less than 1msec.

regards, tom lane

#12 Thomas Munro
thomas.munro@gmail.com
In reply to: Nathan Bossart (#8)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 11:37 AM Nathan Bossart
<nathandbossart@gmail.com> wrote:

On Thu, Mar 09, 2023 at 05:27:08PM -0500, Tom Lane wrote:

Is it reasonable to assume that all modern platforms can time
millisecond delays accurately? Ten years ago I'd have suggested
truncating the delay to a multiple of 10msec and using this logic
to track the remainder, but maybe now that's unnecessary.

If so, it might also be worth updating or removing this comment in
pgsleep.c:

* NOTE: although the delay is specified in microseconds, the effective
* resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect
* the requested delay to be rounded up to the next resolution boundary.

I've had doubts for some time about whether this is still accurate...

What I see with the old select(), or a more modern clock_nanosleep()
call, is that Linux, FreeBSD, macOS are happy sleeping for .1ms, .5ms,
1ms, 2ms, 3ms, and through inaccuracies and scheduling overheads etc
it works out to about 5-25% extra sleep time (I expect that can be
affected by choice of time source/available hardware, and perhaps
various system calls use different tricks). I definitely recall the
behaviour described, back in the old days where more stuff was
scheduler-tick based. I have no clue for Windows; quick googling
tells me that it might still be pretty chunky, unless you do certain
other stuff that I didn't follow up; we could probably get more
accurate sleep times by rummaging through nt.dll. It would be good to
find out how well WaitEventSet does on Windows; perhaps we should have
a little timing accuracy test in the tree to collect build farm data?

FWIW epoll has a newer _pwait2() call that has higher res timeout
argument, and Windows WaitEventSet could also do high res timers if
you add timer events rather than using the timeout argument, and I
guess conceptually even the old poll() thing could do the equivalent
with a signal alarm timer, but it sounds a lot like a bad idea to do
very short sleeps to me, burning so much CPU on scheduling. I kinda
wonder if the 10ms + residual thing might even turn out to be a better
idea... but I dunno.

The 1ms residual thing looks pretty good to me as a fix to the
immediate problem report, but we might also want to adjust the wording
in config.sgml?

#13 Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#12)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Erm, but maybe I'm just looking at this too myopically. Is there
really any point in letting people set it to 0.5, if it behaves as if
you'd set it to 1 and doubled the cost limit? Isn't it just more
confusing? I haven't read the discussion from when fractional delays
came in, where I imagine that must have come up...

#14 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#13)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Thomas Munro <thomas.munro@gmail.com> writes:

Erm, but maybe I'm just looking at this too myopically. Is there
really any point in letting people set it to 0.5, if it behaves as if
you'd set it to 1 and doubled the cost limit? Isn't it just more
confusing? I haven't read the discussion from when fractional delays
came in, where I imagine that must have come up...

At [1] I argued

The reason is this: what we want to do is throttle VACUUM's I/O demand,
and by "throttle" I mean "gradually reduce". There is nothing gradual
about issuing a few million I/Os and then sleeping for many milliseconds;
that'll just produce spikes and valleys in the I/O demand. Ideally,
what we'd have it do is sleep for a very short interval after each I/O.
But that's not too practical, both for code-structure reasons and because
most platforms don't give us a way to so finely control the length of a
sleep. Hence the design of sleeping for awhile after every so many I/Os.

However, the current settings are predicated on the assumption that
you can't get the kernel to give you a sleep of less than circa 10ms.
That assumption is way outdated, I believe; poking around on systems
I have here, the minimum delay time using pg_usleep(1) seems to be
generally less than 100us, and frequently less than 10us, on anything
released in the last decade.

I propose therefore that instead of increasing vacuum_cost_limit,
what we ought to be doing is reducing vacuum_cost_delay by a similar
factor. And, to provide some daylight for people to reduce it even
more, we ought to arrange for it to be specifiable in microseconds
not milliseconds. There's no GUC_UNIT_US right now, but it's time.

That last point was later overruled in favor of keeping it measured in
msec to avoid breaking existing configuration files. Nonetheless,
vacuum_cost_delay *is* an actual time to wait (conceptually at least),
not just part of a unitless ratio; and there seem to be good arguments
in favor of letting people make it small.

I take your point that really short sleeps are inefficient so far as the
scheduling overhead goes. But on modern machines you probably have to get
down to a not-very-large number of microseconds before that's a big deal.

regards, tom lane

[1]: /messages/by-id/28720.1552101086@sss.pgh.pa.us

#15 Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#14)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

I propose therefore that instead of increasing vacuum_cost_limit,
what we ought to be doing is reducing vacuum_cost_delay by a similar
factor. And, to provide some daylight for people to reduce it even
more, we ought to arrange for it to be specifiable in microseconds
not milliseconds. There's no GUC_UNIT_US right now, but it's time.

That last point was later overruled in favor of keeping it measured in
msec to avoid breaking existing configuration files. Nonetheless,
vacuum_cost_delay *is* an actual time to wait (conceptually at least),
not just part of a unitless ratio; and there seem to be good arguments
in favor of letting people make it small.

I take your point that really short sleeps are inefficient so far as the
scheduling overhead goes. But on modern machines you probably have to get
down to a not-very-large number of microseconds before that's a big deal.

OK. One idea is to provide a WaitLatchUsec(), which is just some
cross platform donkeywork that I think I know how to type in, and it
would have to round up on poll() and Windows builds. Then we could
either also provide WaitEventSetResolution() that returns 1000 or 1
depending on availability of 1us waits so that we could round
appropriately and then track residual, but beyond that let the user
worry about inaccuracies and overheads (as mentioned in the
documentation), or we could start consulting the clock and tracking
our actual sleep time and true residual over time (maybe that's what
"closed-loop control" means?).

#16 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#15)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Thomas Munro <thomas.munro@gmail.com> writes:

OK. One idea is to provide a WaitLatchUsec(), which is just some
cross platform donkeywork that I think I know how to type in, and it
would have to round up on poll() and Windows builds. Then we could
either also provide WaitEventSetResolution() that returns 1000 or 1
depending on availability of 1us waits so that we could round
appropriately and then track residual, but beyond that let the user
worry about inaccuracies and overheads (as mentioned in the
documentation),

... so we'd still need to have the residual-sleep-time logic?

or we could start consulting the clock and tracking
our actual sleep time and true residual over time (maybe that's what
"closed-loop control" means?).

Yeah, I was hand-waving about trying to measure our actual sleep times.
On reflection I doubt it's a great idea. It'll add overhead and there's
still a question of whether measurement noise would accumulate.

regards, tom lane

#17 Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#16)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

OK. One idea is to provide a WaitLatchUsec(), which is just some
cross platform donkeywork that I think I know how to type in, and it
would have to round up on poll() and Windows builds. Then we could
either also provide WaitEventSetResolution() that returns 1000 or 1
depending on availability of 1us waits so that we could round
appropriately and then track residual, but beyond that let the user
worry about inaccuracies and overheads (as mentioned in the
documentation),

... so we'd still need to have the residual-sleep-time logic?

Ah, perhaps not. Considering that the historical behaviour on the
main affected platform (Windows) was already to round up to
milliseconds before we latchified this code anyway, and now a google
search is telling me that the relevant timer might in fact be *super*
lumpy, perhaps even to the point of 1/64th of a second [1] (maybe
that's a problem for a Windows hacker to look into some time; I really
should create a wiki page of known Windows problems in search of a
hacker)... it now looks like sub-ms residual logic would be a bit
pointless after all.

I'll go and see about usec latch waits. More soon.

[1]: https://randomascii.wordpress.com/2020/10/04/windows-timer-resolution-the-great-rule-change/

#18 Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#17)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 2:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I'll go and see about usec latch waits. More soon.

Here are some experimental patches along those lines. Seems good
locally, but I saw a random failure I don't understand on CI so
apparently I need to find a bug; at least this gives an idea of how
this might look. Unfortunately, the new interface on Linux turned out
to be newer than I first realised: Linux 5.11+ (so RHEL 9, Debian
12/Bookworm, Ubuntu 21.04/Hirsute Hippo), so unless we're OK with it
taking a couple more years to be more widely used, we'll need some
fallback code. Perhaps something like 0004, which also shows the sort
of thing that we might consider back-patching to 14 and 15 (next
revision I'll move that up the front and put it in back-patchable
form). It's not exactly beautiful; maybe sharing code with recovery's
lazy PM-exit detection could help. Of course, the new μs-based wait
API could be used wherever we do timestamp-based waiting, for no
particular reason other than that it is the resolution of our
timestamps, so there is no need to bother rounding; I doubt anyone
would notice or care much about that, but it's a vote in favour of μs
rather than the other obvious contender ns, which modern underlying
kernel primitives are using.

Attachments:

0001-Support-microsecond-based-timeouts-in-WaitEventSet-A.patch (text/x-patch, +127 −40)
0002-Use-microsecond-based-naps-for-vacuum_cost_delay-sle.patch (text/x-patch, +4 −5)
0003-Use-microsecond-based-naps-in-walreceiver.patch (text/x-patch, +30 −9)
0004-Provide-fallback-implementation-of-vacuum_cost_delay.patch (text/x-patch, +42 −6)
#19 Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#18)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

On Fri, Mar 10, 2023 at 6:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:

... Perhaps something like 0004, which also shows the sort
of thing that we might consider back-patching to 14 and 15 (next
revision I'll move that up the front and put it in back-patchable
form).

I think this is the minimal back-patchable change. I propose to go
ahead and do that, and then to kick the ideas about latch API changes
into a new thread for the next commitfest.

Attachments:

0001-Fix-fractional-vacuum_cost_delay.patch (text/x-patch, +13 −6)
#20 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#19)
Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken

Thomas Munro <thomas.munro@gmail.com> writes:

I think this is the minimal back-patchable change. I propose to go
ahead and do that, and then to kick the ideas about latch API changes
into a new thread for the next commitfest.

OK by me, but then again 4753ef37 wasn't my patch.

regards, tom lane

#21 Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#12)

#22 Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#20)

#23 Nathan Bossart
nathandbossart@gmail.com
In reply to: Thomas Munro (#22)

#24 Thomas Munro
thomas.munro@gmail.com
In reply to: Nathan Bossart (#23)

#25 Nathan Bossart
nathandbossart@gmail.com
In reply to: Thomas Munro (#24)

#26 Thomas Munro
thomas.munro@gmail.com
In reply to: Nathan Bossart (#25)