Experimenting with hash tables inside pg_dump

Started by Tom Lane · over 4 years ago · 21 messages · pgsql-hackers
#1 Tom Lane
tgl@sss.pgh.pa.us

Today, pg_dump does a lot of internal lookups via binary search
in presorted arrays. I thought it might improve matters
to replace those binary searches with hash tables, theoretically
converting O(log N) searches into O(1) searches. So I tried making
a hash table indexed by CatalogId (tableoid+oid) with simplehash.h,
and replacing as many data structures as I could with that.
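
The lookup structure being described can be sketched in isolation like so. This is a simplified stand-in, not pg_dump's actual simplehash.h instantiation: the fixed table size, the hash mix, and the names catid_insert/catid_lookup are invented for illustration.

```c
#include <stdlib.h>
#include <stdint.h>
#include <assert.h>

/* CatalogId as pg_dump uses it: a (tableoid, oid) pair. */
typedef struct CatalogId
{
    uint32_t tableoid;
    uint32_t oid;
} CatalogId;

typedef struct Entry
{
    CatalogId key;
    void     *object;           /* would be DumpableObject * in pg_dump */
    int       used;
} Entry;

#define TABSIZE 1024            /* power of two; fixed size for the sketch */
static Entry tab[TABSIZE];

static uint32_t
hash_catid(CatalogId id)
{
    /* cheap mix of the two oids; simplehash would use a real hash function */
    uint32_t h = id.oid * 2654435761u;

    h ^= id.tableoid * 40503u;
    return h;
}

void
catid_insert(CatalogId id, void *obj)
{
    uint32_t i = hash_catid(id) & (TABSIZE - 1);

    while (tab[i].used)
        i = (i + 1) & (TABSIZE - 1);    /* linear probing */
    tab[i].key = id;
    tab[i].object = obj;
    tab[i].used = 1;
}

void *
catid_lookup(CatalogId id)
{
    uint32_t i = hash_catid(id) & (TABSIZE - 1);

    while (tab[i].used)
    {
        if (tab[i].key.oid == id.oid && tab[i].key.tableoid == id.tableoid)
            return tab[i].object;
        i = (i + 1) & (TABSIZE - 1);
    }
    return NULL;                /* not found */
}
```

The point of the patch is that one table of this shape, keyed by CatalogId, can replace every one of the presorted per-catalog index arrays and their bsearch() calls.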

This makes the code shorter and (IMO anyway) cleaner, but

(a) the executable size increases by a few KB --- apparently, even
the minimum subset of simplehash.h's functionality is code-wasteful.

(b) I couldn't measure any change in performance at all. I tried
it on the regression database and on a toy DB with 10000 simple
tables. Maybe on a really large DB you'd notice some difference,
but I'm not very optimistic now.

So this experiment feels like a failure, but I thought I'd post
the patch and results for the archives' sake. Maybe somebody
will think of a way to improve matters. Or maybe it's worth
doing just to shorten the code?

regards, tom lane

Attachments:

use-simplehash-in-pg-dump-1.patch (text/x-diff; charset=us-ascii) +187 −328
#2 Nathan Bossart
nathandbossart@gmail.com
In reply to: Tom Lane (#1)
Re: Experimenting with hash tables inside pg_dump

On 10/21/21, 3:29 PM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:

> (b) I couldn't measure any change in performance at all. I tried
> it on the regression database and on a toy DB with 10000 simple
> tables. Maybe on a really large DB you'd notice some difference,
> but I'm not very optimistic now.

I wonder how many tables you'd need to start seeing a difference.
There are certainly databases out there with many more than 10,000
tables. I'll look into this...

Nathan

#3 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#1)
Re: Experimenting with hash tables inside pg_dump

Hi,

On 2021-10-21 18:27:25 -0400, Tom Lane wrote:

> Today, pg_dump does a lot of internal lookups via binary search
> in presorted arrays. I thought it might improve matters
> to replace those binary searches with hash tables, theoretically
> converting O(log N) searches into O(1) searches. So I tried making
> a hash table indexed by CatalogId (tableoid+oid) with simplehash.h,
> and replacing as many data structures as I could with that.

That does sound like a good idea in theory...

> This makes the code shorter and (IMO anyway) cleaner, but
>
> (a) the executable size increases by a few KB --- apparently, even
> the minimum subset of simplehash.h's functionality is code-wasteful.

Hm. Surprised a bit by that. In an optimized build the difference is
smaller, at least.

optimized:
text data bss dec hex filename
448066 7048 1368 456482 6f722 src/bin/pg_dump/pg_dump
447530 7048 1496 456074 6f58a src/bin/pg_dump/pg_dump.orig

debug:
text data bss dec hex filename
516883 7024 1352 525259 803cb src/bin/pg_dump/pg_dump
509819 7024 1480 518323 7e8b3 src/bin/pg_dump/pg_dump.orig

The fact that optimization plays such a role makes me wonder if a good chunk
of the difference is the slightly more complicated find{Type,Func,...}ByOid()
functions.

> (b) I couldn't measure any change in performance at all. I tried
> it on the regression database and on a toy DB with 10000 simple
> tables. Maybe on a really large DB you'd notice some difference,
> but I'm not very optimistic now.

Did you measure runtime of pg_dump, or how much CPU it used? I think a lot of
the time the backend is a bigger bottleneck than pg_dump...

For the regression test DB the majority of the time seems to be spent below
two things:
1) libpq
2) sortDumpableObjects().

I don't think 2) hits the binary search / hashtable path?

It does seem interesting that a substantial part of the time is spent in/below
PQexec() and PQfnumber(). Especially the latter shouldn't be too hard to
optimize away...

Greetings,

Andres Freund

#4 Nathan Bossart
nathandbossart@gmail.com
In reply to: Andres Freund (#3)
Re: Experimenting with hash tables inside pg_dump

On 10/21/21, 4:14 PM, "Bossart, Nathan" <bossartn@amazon.com> wrote:

> On 10/21/21, 3:29 PM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
>
>> (b) I couldn't measure any change in performance at all. I tried
>> it on the regression database and on a toy DB with 10000 simple
>> tables. Maybe on a really large DB you'd notice some difference,
>> but I'm not very optimistic now.
>
> I wonder how many tables you'd need to start seeing a difference.
> There are certainly databases out there with many more than 10,000
> tables. I'll look into this...

Well, I tested with 200,000 tables and saw no difference with this.

Nathan

#5 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#3)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> Did you measure runtime of pg_dump, or how much CPU it used?

I was looking mostly at wall-clock runtime, though I did notice
that the CPU time looked about the same too.

> I think a lot of
> the time the backend is a bigger bottleneck than pg_dump...

Yeah, that. I tried doing a system-wide "perf" measurement, and soon
realized that a big fraction of the time for a "pg_dump -s" run is
being spent in the planner :-(. I'm currently experimenting with
PREPARE'ing pg_dump's repetitive queries, and it's looking very
promising. More later.
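
The PREPARE idea amounts to planning each of pg_dump's repetitive per-object queries once, then re-executing it with a different oid per object. A hypothetical sketch of the shape (the statement name and query text here are illustrative, not the patch's actual queries):

```sql
-- Plan once per session...
PREPARE getDomainConstraints(pg_catalog.oid) AS
SELECT tableoid, oid, conname,
       pg_catalog.pg_get_constraintdef(oid) AS consrc
FROM pg_catalog.pg_constraint
WHERE contypid = $1
ORDER BY conname;

-- ...then execute once per object, skipping parse/plan overhead each time.
EXECUTE getDomainConstraints(16384);
EXECUTE getDomainConstraints(16402);
```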

regards, tom lane

#6 Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#3)
Re: Experimenting with hash tables inside pg_dump

Hi,

On 2021-10-21 16:37:57 -0700, Andres Freund wrote:

> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:
>
>> (a) the executable size increases by a few KB --- apparently, even
>> the minimum subset of simplehash.h's functionality is code-wasteful.
>
> Hm. Surprised a bit by that. In an optimized build the difference is
> smaller, at least.
>
> optimized:
> text data bss dec hex filename
> 448066 7048 1368 456482 6f722 src/bin/pg_dump/pg_dump
> 447530 7048 1496 456074 6f58a src/bin/pg_dump/pg_dump.orig
>
> debug:
> text data bss dec hex filename
> 516883 7024 1352 525259 803cb src/bin/pg_dump/pg_dump
> 509819 7024 1480 518323 7e8b3 src/bin/pg_dump/pg_dump.orig
>
> The fact that optimization plays such a role makes me wonder if a good chunk
> of the difference is the slightly more complicated find{Type,Func,...}ByOid()
> functions.

It's not that.

In a debug build a good chunk of it is due to a bunch of Assert()s. Another
part is that trivial helper functions like SH_PREV() don't get inlined.

The increase for an optimized build seems to boil down to pg_log_error()
invocations. If I replace those with an exit(1), the resulting binaries are
within 100 bytes.

If I prevent the compiler from inlining findObjectByCatalogId() in all the
find*ByOid() routines, your version is smaller than master even without other
changes.

Greetings,

Andres Freund

#7 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#5)
Re: Experimenting with hash tables inside pg_dump

Hi,

On 2021-10-21 20:22:56 -0400, Tom Lane wrote:

> Andres Freund <andres@anarazel.de> writes:
> Yeah, that. I tried doing a system-wide "perf" measurement, and soon
> realized that a big fraction of the time for a "pg_dump -s" run is
> being spent in the planner :-(.

A trick for seeing the proportions of this easily in perf is to start both
postgres and pg_dump pinned to a specific CPU, and profile that cpu. That gets
rid of most of the noise of other programs etc.

> I'm currently experimenting with
> PREPARE'ing pg_dump's repetitive queries, and it's looking very
> promising. More later.

Good idea.

I wonder though if for some of them we should instead replace the per-object
queries with one query returning the information for all objects of a type. It
doesn't make all that much sense that we build and send one query for each
table and index.

Greetings,

Andres Freund

#8 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#7)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> I wonder though if for some of them we should instead replace the per-object
> queries with one query returning the information for all objects of a type. It
> doesn't make all that much sense that we build and send one query for each
> table and index.

The trick is the problem I alluded to in another thread: it's not safe to
do stuff like pg_get_expr() on tables we don't have lock on.

I've thought about doing something like

SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)

but in cases with tens of thousands of tables, it seems unlikely that
that's going to behave all that nicely.

The *real* fix, I suppose, would be to fix all those catalog-inspection
functions so that they operate with respect to the query's snapshot.
But that's not a job I'm volunteering for. Besides which, pg_dump
still has to cope with back-rev servers where it wouldn't be safe.

regards, tom lane

#9 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#8)
Re: Experimenting with hash tables inside pg_dump

Hi,

On 2021-10-21 22:13:22 -0400, Tom Lane wrote:

> Andres Freund <andres@anarazel.de> writes:
>
>> I wonder though if for some of them we should instead replace the per-object
>> queries with one query returning the information for all objects of a type. It
>> doesn't make all that much sense that we build and send one query for each
>> table and index.
>
> The trick is the problem I alluded to in another thread: it's not safe to
> do stuff like pg_get_expr() on tables we don't have lock on.

I was looking at getTableAttrs() - sending one query instead of #tables
queries yields a quite substantial speedup in a quick prototype. And I don't
think it changes anything around locking semantics.

> I've thought about doing something like
>
> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)
>
> but in cases with tens of thousands of tables, it seems unlikely that
> that's going to behave all that nicely.

That's kinda what I'm doing in the quick hack. But instead of using IN(...) I
made it unnest('{oid, oid, ...}'), that scales much better.

A pg_dump --schema-only of the regression database goes from

real 0m0.675s
user 0m0.039s
sys 0m0.029s

to

real 0m0.477s
user 0m0.037s
sys 0m0.020s

which isn't half-bad.
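
The batched shape being described replaces one query per table with a single query driven by an oid array; roughly like this (illustrative column list and oids, not the prototype's exact query):

```sql
SELECT a.attrelid, a.attnum, a.attname, a.atttypmod, a.attnotnull
FROM unnest('{16384,16402,16437}'::pg_catalog.oid[]) AS src(tbloid)
JOIN pg_catalog.pg_attribute a ON a.attrelid = src.tbloid
WHERE a.attnum > 0::pg_catalog.int2
ORDER BY a.attrelid, a.attnum;
```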

There's a few more cases like this I think. But most are harder because the
dumping happens one-by-one from dumpDumpableObject(). The relatively easy but
substantial cases I could find quickly were getIndexes(), getConstraints(),
and getTriggers().

To see where it's worth putting in time it'd be useful if getSchemaData() in
verbose mode printed timing information...

> The *real* fix, I suppose, would be to fix all those catalog-inspection
> functions so that they operate with respect to the query's snapshot.
> But that's not a job I'm volunteering for. Besides which, pg_dump
> still has to cope with back-rev servers where it wouldn't be safe.

Yea, that's not a small change :(. I suspect that we'd need a bunch of new
caching infrastructure to make that reasonably performant, since this
presumably couldn't use syscache etc.

Greetings,

Andres Freund

Attachments:

pg_dump-bulk-gettableattrs.diff (text/x-diff; charset=us-ascii) +373 −272
#10 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#9)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> On 2021-10-21 22:13:22 -0400, Tom Lane wrote:
>
>> I've thought about doing something like
>> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)
>> but in cases with tens of thousands of tables, it seems unlikely that
>> that's going to behave all that nicely.
>
> That's kinda what I'm doing in the quick hack. But instead of using IN(...) I
> made it unnest('{oid, oid, ...}'), that scales much better.

I'm skeptical of that, mainly because it doesn't work in old servers,
and I really don't want to maintain two fundamentally different
versions of getTableAttrs(). I don't think you actually need the
multi-array form of unnest() here --- we know the TableInfo array
is in OID order --- but even the single-array form only works
back to 8.4.

However ... looking through getTableAttrs' main query, it seems
like the only thing there that's potentially unsafe is the
"format_type(t.oid, a.atttypmod)" call. I wonder if it could be
sane to convert it into a single query that just scans all of
pg_attribute, and then deal with creating the formatted type names
separately, perhaps with an improved version of getFormattedTypeName
that could cache the results for non-default typmods. The main
knock on this approach is the temptation for somebody to stick some
unsafe function into the query in future. We could stick a big fat
warning comment into the code, but lately I despair of people reading
comments.
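
The caching idea for getFormattedTypeName can be sketched as follows. This is a hypothetical illustration, not pg_dump code: the linear cache and fetch_formatted_type (a stand-in simulating a "SELECT format_type($1, $2)" round trip to the server) are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <assert.h>

/* Memoize formatted type names per (type oid, typmod) pair so the server
 * need only be asked once per distinct combination. */
typedef struct TypeNameCacheEntry
{
    uint32_t typid;
    int32_t  typmod;
    char    *name;
} TypeNameCacheEntry;

#define CACHE_MAX 256
static TypeNameCacheEntry cache[CACHE_MAX];
static int cache_len = 0;
static int cache_misses = 0;    /* counts how often we'd query the server */

static char *
dup_string(const char *s)
{
    size_t n = strlen(s) + 1;
    char *p = malloc(n);

    memcpy(p, s, n);
    return p;
}

/* stand-in for issuing "SELECT format_type($1, $2)" to the server */
static char *
fetch_formatted_type(uint32_t typid, int32_t typmod)
{
    char buf[64];

    cache_misses++;
    snprintf(buf, sizeof(buf), "type_%u(%d)", typid, typmod);
    return dup_string(buf);
}

const char *
get_formatted_type_cached(uint32_t typid, int32_t typmod)
{
    for (int i = 0; i < cache_len; i++)
        if (cache[i].typid == typid && cache[i].typmod == typmod)
            return cache[i].name;       /* hit: no server round trip */
    if (cache_len < CACHE_MAX)
    {
        cache[cache_len].typid = typid;
        cache[cache_len].typmod = typmod;
        cache[cache_len].name = fetch_formatted_type(typid, typmod);
        return cache[cache_len++].name;
    }
    return fetch_formatted_type(typid, typmod); /* cache full: fall through */
}
```

A real version would hash rather than scan linearly, but the point is the same: each distinct (type oid, typmod) pair costs only one server round trip, and the default-typmod case would hit the cache almost always.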

> To see where it's worth putting in time it'd be useful if getSchemaData() in
> verbose mode printed timing information...

I've been running test cases with log_min_duration_statement = 0,
which serves well enough.

regards, tom lane

#11 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#10)
Re: Experimenting with hash tables inside pg_dump

Hi,

On 2021-10-22 10:53:31 -0400, Tom Lane wrote:

> Andres Freund <andres@anarazel.de> writes:
>
>> On 2021-10-21 22:13:22 -0400, Tom Lane wrote:
>
>>> I've thought about doing something like
>>> SELECT unsafe-functions FROM pg_class WHERE oid IN (someoid, someoid, ...)
>>> but in cases with tens of thousands of tables, it seems unlikely that
>>> that's going to behave all that nicely.
>
>> That's kinda what I'm doing in the quick hack. But instead of using IN(...) I
>> made it unnest('{oid, oid, ...}'), that scales much better.
>
> I'm skeptical of that, mainly because it doesn't work in old servers,
> and I really don't want to maintain two fundamentally different
> versions of getTableAttrs(). I don't think you actually need the
> multi-array form of unnest() here --- we know the TableInfo array
> is in OID order --- but even the single-array form only works
> back to 8.4.

I think we can address that, if we think it's overall a promising approach to
pursue. E.g. if we don't need the indexes, we can make it = ANY().
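
For reference, the back-branch-friendly spelling would look something like this (illustrative fragment, not patch text) — = ANY over an array works on servers far older than unnest() does:

```sql
SELECT a.attrelid, a.attnum, a.attname, a.atttypmod
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = ANY ('{16384,16402,16437}'::pg_catalog.oid[])
ORDER BY a.attrelid, a.attnum;
```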

> However ... looking through getTableAttrs' main query, it seems
> like the only thing there that's potentially unsafe is the
> "format_type(t.oid, a.atttypmod)" call.

I assume the default expression bit would also be unsafe?

Greetings,

Andres Freund

#12 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#11)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> On 2021-10-22 10:53:31 -0400, Tom Lane wrote:
>
>> I'm skeptical of that, mainly because it doesn't work in old servers,
>
> I think we can address that, if we think it's overall a promising approach to
> pursue. E.g. if we don't need the indexes, we can make it = ANY().

Hmm ... yeah, I guess we could get away with that. It might not scale
as nicely to a huge database, but probably dumping a huge database
from an ancient server isn't all that interesting.

I'm inclined to think that it could be sane to make getTableAttrs
and getIndexes use this style, but we probably still want functions
and such to use per-object queries. In those other catalogs there
are many built-in objects that we don't really care about. The
prepared-queries hack I was working on last night is probably plenty
good enough there, and it's a much less invasive patch.

Were you planning to pursue this further, or did you want me to?
I'd want to layer it on top of the work I did at [1], else there's
going to be lots of merge conflicts.

regards, tom lane

[1]: /messages/by-id/2273648.1634764485@sss.pgh.pa.us

#13 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#6)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:
>
>> (a) the executable size increases by a few KB --- apparently, even
>> the minimum subset of simplehash.h's functionality is code-wasteful.
>
> If I prevent the compiler from inlining findObjectByCatalogId() in all the
> find*ByOid() routines, your version is smaller than master even without other
> changes.

Hmm ... seems to depend a lot on which compiler you use.

I was originally looking at it with gcc 8.4.1 (RHEL8 default),
x86_64. On that, adding pg_noinline to findObjectByCatalogId
helps a little, but it's still 3.6K bigger than HEAD.

I then tried gcc 11.2.1/x86_64, finding that the patch adds
about 2K and pg_noinline makes no difference.

I also tried it on Apple's clang 13.0.0, both x86_64 and ARM
versions. On that, the change seems to be a wash or slightly
smaller, with maybe a little benefit from pg_noinline but not
much. It's hard to tell for sure because size(1) seems to be
rounding off to a page boundary on that platform.

Anyway, these are all sub-one-percent changes in the code
size, so probably we should not sweat that much about it.
I'm kind of leaning now towards pushing the patch, just
on the grounds that getting rid of all those single-purpose
index arrays (and likely future need for more of them)
is worth it from a maintenance perspective.

regards, tom lane

#14 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#12)
Re: Experimenting with hash tables inside pg_dump

Hi,

On October 22, 2021 8:54:13 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Andres Freund <andres@anarazel.de> writes:
>
>> On 2021-10-22 10:53:31 -0400, Tom Lane wrote:
>
>>> I'm skeptical of that, mainly because it doesn't work in old servers,
>
>> I think we can address that, if we think it's overall a promising approach to
>> pursue. E.g. if we don't need the indexes, we can make it = ANY().
>
> Hmm ... yeah, I guess we could get away with that. It might not scale
> as nicely to a huge database, but probably dumping a huge database
> from an ancient server isn't all that interesting.

I think compared to the overhead of locking that many tables and sending O(N) queries it shouldn't be a huge factor.

One thing that looks like it might be worth doing, and not hard, is to use single-row mode. No need to materialize all that data twice in memory.

At a later stage it might be worth sending the array separately as a parameter. Perhaps even binary encoded.

> I'm inclined to think that it could be sane to make getTableAttrs
> and getIndexes use this style, but we probably still want functions
> and such to use per-object queries. In those other catalogs there
> are many built-in objects that we don't really care about. The
> prepared-queries hack I was working on last night is probably plenty
> good enough there, and it's a much less invasive patch.

Yes, that seems reasonable. I think the triggers query would benefit from the batch approach though - I see that taking a long time in aggregate on a test database with many tables I had around (partially due to the self join), and we already materialize it.

> Were you planning to pursue this further, or did you want me to?

It seems too nice an improvement to drop on the floor. That said, I don't really have the mental bandwidth to pursue this beyond the POC stage - it seemed complicated enough that suggestion accompanied by a prototype was a good idea. So I'd be happy for you to incorporate this into your other changes.

> I'd want to layer it on top of the work I did at [1], else there's
> going to be lots of merge conflicts.

Makes sense. Even if nobody else were doing anything in the area I'd probably want to split it into one commit creating the query once, and then separately implement the batching.

Regards,

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

#15 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#14)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> On October 22, 2021 8:54:13 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>> Were you planning to pursue this further, or did you want me to?
>
> It seems too nice an improvement to drop on the floor. That said, I don't really have the mental bandwidth to pursue this beyond the POC stage - it seemed complicated enough that suggestion accompanied by a prototype was a good idea. So I'd be happy for you to incorporate this into your other changes.

Cool, I'll see what I can do with it, as long as I'm poking around
in the area.

regards, tom lane

#16 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#13)
Re: Experimenting with hash tables inside pg_dump

Hi,

On October 22, 2021 10:32:30 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Andres Freund <andres@anarazel.de> writes:
>
>> On 2021-10-21 18:27:25 -0400, Tom Lane wrote:
>
>>> (a) the executable size increases by a few KB --- apparently, even
>>> the minimum subset of simplehash.h's functionality is code-wasteful.
>
>> If I prevent the compiler from inlining findObjectByCatalogId() in all the
>> find*ByOid() routines, your version is smaller than master even without other
>> changes.
>
> Hmm ... seems to depend a lot on which compiler you use.

Inline heuristics change a lot over time, so that'd make sense.

I see some win by marking pg_log_error cold. That might be useful more generally too.

Which made me look at the code invoking it from simplehash. I think the patch that made simplehash work in frontend code isn't quite right, because pg_log_error() returns...

Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.

> Anyway, these are all sub-one-percent changes in the code
> size, so probably we should not sweat that much about it.
> I'm kind of leaning now towards pushing the patch, just
> on the grounds that getting rid of all those single-purpose
> index arrays (and likely future need for more of them)
> is worth it from a maintenance perspective.

+1

The only thought I had wrt the patch is that I'd always create the hash table. That way the related branches can be removed, which is a win code size wise (as well as speed presumably, but I think we're far away from that mattering).

This type of code is where I most wish for a language with proper generic data types/containers...

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

#17 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#16)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> Which made me look at the code invoking it from simplehash. I think the patch that made simplehash work in frontend code isn't quite right, because pg_log_error() returns...

Indeed, that's broken. I guess we want pg_log_fatal then exit(1).

> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.

Seems plausible --- you want me to go change that?

> The only thought I had wrt the patch is that I'd always create the hash
> table.

That'd require adding an explicit init function and figuring out where to
call it, which we could do but I didn't (and don't) think it's worth the
trouble. One more branch here isn't going to matter, especially given
that we can't even measure the presumed macro improvement.

regards, tom lane

#18 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Tom Lane (#17)
Re: Experimenting with hash tables inside pg_dump

I wrote:

> Andres Freund <andres@anarazel.de> writes:
>
>> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.
>
> Seems plausible --- you want me to go change that?

Hmm, harder than it sounds. If I remove "inline" from SH_SCOPE then
the compiler complains about unreferenced static functions, while
if I leave it there then adding pg_noinline causes a complaint about
conflicting options. Seems like we need a less quick-and-dirty
approach to dealing with unnecessary simplehash support functions.

regards, tom lane

#19 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#18)
Re: Experimenting with hash tables inside pg_dump

Hi,

Thanks for pushing the error handling cleanup etc!

On 2021-10-22 16:32:39 -0400, Tom Lane wrote:

> I wrote:
>
>> Andres Freund <andres@anarazel.de> writes:
>
>>> Wonder if we should mark simplehash's grow as noinline? Even with a single caller it seems better to not inline it to remove register allocator pressure.
>
>> Seems plausible --- you want me to go change that?
>
> Hmm, harder than it sounds. If I remove "inline" from SH_SCOPE then
> the compiler complains about unreferenced static functions, while
> if I leave it there then adding pg_noinline causes a complaint about
> conflicting options.

The easy way out would be to not declare SH_GROW inside SH_DECLARE - that'd
currently work, because there aren't any calls to grow from outside of
simplehash.h. The comment says:
* ... But resizing to the exact input size can be advantageous
* performance-wise, when known at some point.

But perhaps that need is sufficiently served by creating the table with the
correct size immediately?

If we were to go for that, we'd put SH_GROW in the SH_DEFINE section and not
use SH_SCOPE, but plain static. That works here, and I have some hope it'd not
cause warnings on other compilers either, because there'll be references from
the other inline functions. Even if there's a SH_SCOPE=static inline
simplehash use inside a header and there aren't any callers in a TU, there'd
still be static inline references to it.

Another alternative would be to use __attribute__((unused)) or such on
non-static-inline functions that might or might not be used.
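
The two alternatives above hinge on the same compiler behavior, which can be demonstrated standalone. The function body below is a made-up stand-in for SH_GROW, and the attribute spellings are the raw GCC/Clang forms that PostgreSQL's pg_noinline and pg_attribute_unused() macros wrap:

```c
#include <assert.h>

/* A static helper that may be unreferenced in some translation units.
 * noinline keeps it out of its callers (reducing register pressure);
 * unused silences -Wunused-function when no caller exists in the TU. */
__attribute__((noinline)) __attribute__((unused))
static int
sketch_grow(int size)
{
    /* stand-in for SH_GROW: round up to the next power of two */
    int n = 1;

    while (n < size)
        n <<= 1;
    return n;
}

/* an out-of-line caller, standing in for SH_INSERT's grow path */
int
call_sketch_grow(int size)
{
    return sketch_grow(size);
}
```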

> Seems like we need a less quick-and-dirty approach to dealing with
> unnecessary simplehash support functions.

I don't think the problem is unnecessary ones? It's "cold" functions we don't
want to have inlined into larger functions.

Greetings,

Andres Freund

#20 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#19)
Re: Experimenting with hash tables inside pg_dump

Andres Freund <andres@anarazel.de> writes:

> On 2021-10-22 16:32:39 -0400, Tom Lane wrote:
>
>> Hmm, harder than it sounds. If I remove "inline" from SH_SCOPE then
>> the compiler complains about unreferenced static functions, while
>> if I leave it there then adding pg_noinline causes a complaint about
>> conflicting options.
>
> The easy way out would be to not declare SH_GROW inside SH_DECLARE - that'd
> currently work, because there aren't any calls to grow from outside of
> simplehash.h.

Seems like a reasonable approach. If somebody wanted to call that
from outside, I'd personally feel they were getting way too friendly
with the implementation.

>> Seems like we need a less quick-and-dirty approach to dealing with
>> unnecessary simplehash support functions.
>
> I don't think the problem is unnecessary ones?

I was thinking about the stuff like SH_ITERATE, which you might or
might not have use for in any particular file. In the case at hand
here, a file that doesn't call SH_INSERT would be at risk of getting
unused-function complaints about SH_GROW. But as you say, if we do
find that happening, __attribute__((unused)) would probably be
enough to silence it.

regards, tom lane

#21 Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#20)