Coccinelle for PostgreSQL development [1/N]: coccicheck.py
I got some time to spare during the holidays, so I spent some of it
doing something I've been thinking about for a while.
For those of you who are not aware of it: Coccinelle is a tool for pattern
matching and text transformation for C code. It can be used to detect
problematic programming patterns and to make complex, tree-wide patches
easy. It is aware of the structure of C code and is better suited to making
complicated changes than normal text-substitution tools like Sed and Perl.
Coccinelle has been used successfully in the Linux project since 2008
and is now an established tool for Linux development; a large number of
semantic patches have been added to the Linux source tree to capture
everything from generic issues (like eliminating the redundant A in
expressions like "!A || (A && B)") to more Linux-specific problems like
adding a missing call to kfree().
Although PostgreSQL is nowhere near the size of the Linux kernel, it is
nevertheless of significant size and would benefit from incorporating
Coccinelle into its development. I noticed it has been used in a few cases
way back (about 10 years ago) to fix issues in the PostgreSQL code, but I
thought it might be useful to make it part of normal development practice
to, among other things:
- Identify and correct bugs in the source code both during development and
review.
- Make large-scale changes to the source tree to improve the code based on
new insights.
- Encode and enforce APIs by ensuring that function calls are used
correctly.
- Use improved coding patterns for more efficient code.
- Allow extensions to automatically update code for later PostgreSQL
versions.
To that end, I created a series of patches to show how it could be used in
the PostgreSQL tree. It is a lot easier to discuss concrete code and I
split it up into separate messages since that makes it easier to discuss
each individual patch. The series contains code to make it easy to work
with Coccinelle during development and reviews, as well as examples of
semantic patches that capture problems, demonstrate how to make large-scale
changes, enforce APIs, and improve some coding patterns.
This first patch contains the coccicheck.py script, which is a
re-implementation of the coccicheck script that the Linux kernel uses. We
cannot use the coccicheck script directly since it is quite closely tied
to the Linux source tree, and we need something that supports both
Autoconf and Meson. Since Python seems to be used more and more in
the tree, it seems to be the most natural choice. (I have no strong opinion
on what language to use, but think it would be good to have something that
is as platform-independent as possible.)
The intention is that we should be able to use the Linux semantic patches
directly, so it supports the "Requires" and "Options" keywords, which can
be used to require a specific version of spatch(1) and add options to the
execution of that semantic patch, respectively.
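As an illustration, a semantic patch could declare its requirements with
comment lines like the following (the version number and options here are
hypothetical examples, not taken from an actual patch):

// Requires: 1.0.8
// Options: --no-includes --include-headers

coccicheck.py reads these lines, skips the file if the installed spatch is
older than the stated version, and appends the listed options to the spatch
invocation for that semantic patch.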
--
Best wishes,
Mats Kindahl, Timescale
Attachments:
0001-Add-initial-coccicheck-script.v1.patch (text/x-patch)
From 55f5caba3d6cb88e3729985571286c16171f36b3 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 19:35:58 +0100
Subject: Add initial coccicheck script
The coccicheck.py script can be used to run several semantic patches on a
source tree to either generate a report, see the context of the modification
(which lines require changes), or generate a patch to correct an issue.
python coccicheck.py <options> <pattern> <path> ...
Options:
--spatch=SPATCH
Path to spatch binary. Defaults to value of environment variable
SPATCH.
--mode={report,context,patch}
Defaults to value of environment variable MODE.
<pattern>
pattern for all semantic patches to match. For example,
src/tools/cocci/**/*.cocci to match all *.cocci files in the directory
src/tools/cocci.
<path>
Path to source files to apply semantic patches to.
---
src/tools/coccicheck.py | 176 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 176 insertions(+)
create mode 100755 src/tools/coccicheck.py
diff --git a/src/tools/coccicheck.py b/src/tools/coccicheck.py
new file mode 100755
index 00000000000..1fe136b307f
--- /dev/null
+++ b/src/tools/coccicheck.py
@@ -0,0 +1,176 @@
+#!/usr/bin/env python3
+
+"""Run Coccinelle on a set of files and directories.
+
+This is a re-written version of the Linux ``coccicheck`` script.
+
+Coccicheck can run in three different modes (the original has four
+different modes):
+
+- *patch*: patch files using the cocci file.
+
+- *report*: report any improvements that the semantic patch can
+  make, but do not show any patch.
+
+- *context*: show the context where the patch can be applied.
+
+The program will take a single cocci file and call spatch(1) with a
+set of paths that can be either files or directories.
+
+When starting, the cocci file will be parsed and any lines containing
+"Options:" or "Requires:" will be treated specially.
+
+- Lines containing "Options:" will have a list of options to add to
+ the call of the spatch(1) program. These options will be added last.
+
+- Lines containing "Requires:" can contain a version of spatch(1) that
+ is required for this cocci file. If the version requirements are not
+ satisfied, the file will not be used.
+
+When calling spatch(1), it will set one of the virtual rules "patch",
+"report", or "context", and the cocci file can use these to act
+differently depending on the mode.
+
+You need to set the following environment variables to control the
+default:
+
+SPATCH: Path to spatch program. This will be used if no path is
+ passed using the option --spatch.
+
+You may set the following environment variables:
+
+SPFLAGS: Extra flags to use when calling spatch. These will be
+  added last.
+
+"""
+
+import argparse
+import os
+import sys
+import subprocess
+import re
+
+from pathlib import PurePath, Path
+from packaging import version
+
+VERSION_CRE = re.compile(
+ r'spatch version (\S+) compiled with OCaml version (\S+)'
+)
+
+
+def parse_metadata(cocci_file):
+ """Parse metadata in Cocci file."""
+ metadata = {}
+ with open(cocci_file) as fh:
+ for line in fh:
+            mre = re.search(r'(Options|Requires):(.*)', line, re.IGNORECASE)
+ if mre:
+ metadata[mre.group(1).lower()] = mre.group(2)
+ return metadata
+
+
+def get_config(args):
+ """Compute configuration information."""
+ # Figure out spatch version. We just need to read the first line
+ config = {}
+ cmd = [args.spatch, '--version']
+ with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
+ for line in proc.stdout:
+ mre = VERSION_CRE.match(line)
+ if mre:
+ config['spatch_version'] = mre.group(1)
+ break
+ return config
+
+
+def run_spatch(cocci_file, args, config, env):
+ """Run coccinelle on the provided file."""
+ if args.verbose > 1:
+ print("processing cocci file", cocci_file)
+ spatch_version = config['spatch_version']
+ metadata = parse_metadata(cocci_file)
+
+    # Skip the file if the installed spatch is older than the required version
+    if 'requires' in metadata:
+        required_version = version.parse(metadata['requires'].strip())
+        if version.parse(spatch_version) < required_version:
+            print(
+                f'Skipping SmPL patch {cocci_file}: '
+                f'requires {required_version} (had {spatch_version})'
+            )
+            return
+
+ command = [
+ args.spatch,
+ "-D", args.mode,
+ "--cocci-file", cocci_file,
+ "--very-quiet",
+ ]
+
+    if 'options' in metadata:
+        command.extend(metadata['options'].split())
+    if args.mode == 'report':
+        command.append('--no-show-diff')
+    if args.spflags:
+        command.extend(str(args.spflags).split())
+
+ sp = subprocess.run(command + args.path, env=env)
+ if sp.returncode != 0:
+ sys.exit(sp.returncode)
+
+
+def coccinelle(args, config, env):
+ """Run coccinelle on all files matching the provided pattern."""
+    # Path.glob() only accepts relative patterns, so split off any anchor.
+    cocci = PurePath(args.cocci)
+    root = cocci.anchor if cocci.is_absolute() else '.'
+    pattern = str(cocci.relative_to(root)) if cocci.is_absolute() else args.cocci
+    count = 0
+    for cocci_file in Path(root).glob(pattern):
+ count += 1
+ run_spatch(cocci_file, args, config, env)
+ return count
+
+
+def main(argv):
+ """Run coccicheck."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--verbose', '-v', action='count', default=0)
+ parser.add_argument('--spatch', type=PurePath, metavar='SPATCH',
+ default=os.environ.get('SPATCH'),
+ help=('Path to spatch binary. Defaults to '
+ 'value of environment variable SPATCH.'))
+    parser.add_argument('--spflags',
+                        metavar='SPFLAGS',
+                        default=os.environ.get('SPFLAGS', None),
+                        help=('Flags to pass to spatch call. Defaults '
+                              'to value of environment variable SPFLAGS.'))
+ parser.add_argument('--mode', choices=['patch', 'report', 'context'],
+ default=os.environ.get('MODE', 'report'),
+ help=('Mode to use for coccinelle. Defaults to '
+ 'value of environment variable MODE.'))
+ parser.add_argument('--include', '-I', type=PurePath,
+ metavar='DIR',
+ help='Extra include directories.')
+ parser.add_argument('cocci', metavar='pattern',
+ help='Pattern for Cocci files to use.')
+ parser.add_argument('path', nargs='+', type=PurePath,
+ help='Directory or source path to process.')
+
+ args = parser.parse_args(argv)
+
+ if args.verbose > 1:
+ print("arguments:", args)
+
+ if args.spatch is None:
+ parser.error('spatch is part of the Coccinelle project and is '
+ 'available at http://coccinelle.lip6.fr/')
+
+ if coccinelle(args, get_config(args), os.environ) == 0:
+ parser.error(f'no coccinelle files found matching {args.cocci}')
+
+
+if __name__ == '__main__':
+ try:
+ main(sys.argv[1:])
+ except KeyboardInterrupt:
+ print("Execution aborted")
+ except Exception as exc:
+ print(exc)
--
2.43.0
On 2025-01-07 Tu 2:44 PM, Mats Kindahl wrote:
[...]
Please don't start multiple threads like this. If you want to submit a
set of patches for a single feature, send them all as attachments in a
single email. Otherwise this just makes life hard for threading email
readers.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, Jan 8, 2025 at 11:42 AM Andrew Dunstan <andrew@dunslane.net> wrote:
[...]

Please don't start multiple threads like this. If you want to submit a
set of patches for a single feature, send them all as attachments in a
single email. Otherwise this just makes life hard for threading email
readers.
My apologies, I thought this would make it easier to discuss and review the
code. I will send a single email in the future.
Should I resend this as a single email with all the patches?
--
Best wishes,
Mats Kindahl, Timescale
Hi,
My apologies, I thought this would make it easier to discuss and review the code. I will send a single email in the future.
Should I resend this as a single email with all the patches?
IMO the best solution would be re-submitting all the patches to this
thread. Also please make sure the patchset is registered on the
nearest open CF [1]. This will ensure that the patchset is built on our
CI (aka cfbot [2]) and will not be lost.
[1]: https://commitfest.postgresql.org/
[2]: http://cfbot.cputube.org/
--
Best regards,
Aleksander Alekseev
On Tue, Jan 14, 2025 at 4:19 PM Aleksander Alekseev <
aleksander@timescale.com> wrote:
IMO the best solution would be re-submitting all the patches to this
thread. Also please make sure the patchset is registered on the
nearest open CF [1]. This will ensure that the patchset is built on our
CI (aka cfbot [2]) and will not be lost.
[1]: https://commitfest.postgresql.org/
[2]: http://cfbot.cputube.org/
Thank you Aleksander,
Here is a new post with all patches attached and all comments combined.
For those of you who are not aware of it: Coccinelle is a tool for pattern
matching and text transformation for C code. It can be used to detect
problematic programming patterns and to make complex, tree-wide patches
easy. It is aware of the structure of C code and is better suited to making
complicated changes than normal text-substitution tools like Sed and Perl.
I've noticed it's been used in a few cases way back to fix issues [1].
Coccinelle has been used successfully in the Linux project since 2008
and is now an established tool for Linux development; a large number of
semantic patches have been added to the Linux source tree to capture
everything from generic issues (like eliminating the redundant A in
expressions like "!A || (A && B)") to more Linux-specific problems like
adding a missing call to kfree().
Although PostgreSQL is nowhere near the size of the Linux kernel, it is
nevertheless of significant size and would benefit from incorporating
Coccinelle into its development. I noticed it has been used in a few cases
way back (about 10 years ago) to fix issues in the PostgreSQL code, but I
thought it might be useful to make it part of normal development practice
to, among other things:
- Identify and correct bugs in the source code both during development and
review.
- Make large-scale changes to the source tree to improve the code based on
new insights.
- Encode and enforce APIs by ensuring that function calls are used
correctly.
- Use improved coding patterns for more efficient code.
- Allow extensions to automatically update code for later PostgreSQL
versions.
To that end, I created a series of patches to show how it could be used in
the PostgreSQL tree. It is a lot easier to discuss concrete code and I
split it up into separate messages since that makes it easier to discuss
each individual patch. The series contains code to make it easy to work
with Coccinelle during development and reviews, as well as examples of
semantic patches that capture problems, demonstrate how to make large-scale
changes, enforce APIs, and improve some coding patterns.
The first three patches contain the coccicheck.py script and the
integration with the build system (both Meson and Autoconf).
# Coccicheck Script
It is a re-implementation of the coccicheck script that the Linux kernel
uses. We cannot use the coccicheck script directly since it is quite
closely tied to the Linux source tree, and we need something that
supports both Autoconf and Meson. Since Python seems to be used more
and more in the tree, it seems to be the most natural choice. (I have no
strong opinion on what language to use, but think it would be good to have
something that is as platform-independent as possible.)
The intention is that we should be able to use the Linux semantic patches
directly, so it supports the "Requires" and "Options" keywords, which can
be used to require a specific version of spatch(1) and add options to the
execution of that semantic patch, respectively.
# Autoconf support
The Autoconf changes modify configure.ac and related files (in
particular Makefile.global.in). At this point, I have deliberately not
added support for pgxs so extensions cannot use coccicheck through the
PostgreSQL installation. This is something that we can add later though.
The semantic patches are expected to live in the cocci/ directory under the
root and the patch uses the pattern cocci/**/*.cocci to find all semantic
patches. Right now there are no subdirectories for the semantic patches,
but this might be something we want to add to create different categories
of scripts.
The coccicheck target is used in the same way as for the Linux kernel, that
is, to generate and apply all patches suggested by the semantic patches,
you type:
make coccicheck MODE=patch | patch -p1
Linux has support for a few more variables: V to set the verbosity, J to use
multiple jobs for processing the semantic patches, M to select a different
directory to apply the semantic patches to, and COCCI to use a single
specific semantic patch rather than all available. I have not added support
for this right now, but if you think this is valuable, it should be
straightforward to add.
I used autoconf 2.69, as mentioned in configure.ac, but that generated a
bigger diff than I expected. Any advice here is welcome.
# Meson Support
The support for Meson is done by adding three coccicheck targets: one for
each mode. To apply all patches suggested by the semantic patches using
ninja (as is done in [2]), you type the following in the build directory
generated by Meson (e.g., the "build/" subdirectory).
ninja coccicheck-patch | patch -p1 -d ..
If you want to pass other flags you have to set the SPFLAGS environment
variable when calling ninja:
SPFLAGS=--debug ninja coccicheck-report
# Semantic Patch: Wrong type for palloc()
This is the first example of a semantic patch and shows how to capture and
fix a common problem.
If you use palloc() to allocate memory for an object (or an array of
objects) and by mistake type something like:
StringInfoData *info = palloc(sizeof(StringInfoData*));
you will not allocate enough memory for storing the object. This semantic
patch catches any case where the allocation of an array of objects, or of a
single object, does not use the correct type in this sense; more precisely,
it checks that an assignment to a variable of type T* calls palloc() with
sizeof(T), either alone or multiplied by a single expression (assumed to be
an array count).
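For concreteness, the corrected forms for the example above would be the
following (n is just a placeholder for an array count):

/* Correct: allocate the size of the pointed-to type, not of a pointer */
StringInfoData *info = palloc(sizeof(StringInfoData));

/* The array form E * sizeof(T) is matched as well */
StringInfoData *items = palloc(n * sizeof(StringInfoData));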
The semantic patch is overzealous in the sense that using a different but
size-equivalent typedef will also trigger a suggestion (this can be seen in
the patch). Although the sizes are the same in those cases, it is probably
better to just follow the convention of always using the type "T*" for any
"palloc(sizeof(T))", since the typedef can change at any point and would
then introduce a bug. Coccicheck can easily fix this for you, so it is
straightforward to enforce, and following the convention also simplifies
other automated checking.
We don't really have any real bugs as a result of this, but we have one
case where "sizeof(LLVMBasicBlockRef *)" is used for an allocation assigned
to an "LLVMBasicBlockRef *", which strictly speaking is not correct (it
should be "sizeof(LLVMBasicBlockRef)"). However, since both are pointer
types, there is no risk of an incorrect allocation size. There is also one
typedef usage that does not match (sizeof(bytea *) for a GBT_VARKEY * array
in btree_gist).
# Semantic Patch: Introduce palloc_array() and palloc_object() where
possible
This is an example of a large-scale refactoring to improve the code.
For PostgreSQL 16, Peter extended the palloc()/pg_malloc() interface in
commit 2016055a92f to provide more type-safety, but these functions are not
widely used. This semantic patch captures and replaces all uses of palloc()
where palloc_array() or palloc_object() could be used instead. It
deliberately does not touch cases where it is not clear that the
replacement can be done.
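As a rough sketch of the rewrite the semantic patch performs (T and n are
placeholders for an arbitrary type and element count):

/* before */
T *obj = palloc(sizeof(T));
T *arr = palloc(n * sizeof(T));

/* after */
T *obj = palloc_object(T);
T *arr = palloc_array(T, n);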
[1]: https://coccinelle.gitlabpages.inria.fr/website/
[2]: https://www.postgresql.org/docs/current/install-meson.html
--
Best wishes,
Mats Kindahl, Timescale
Attachments:
0004-Add-semantic-patch-for-sizeof-using-palloc.v2.patch (text/x-patch)
From 945f4ced69d43cbaf48e45e15729062cfe0622e3 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 5 Jan 2025 19:26:47 +0100
Subject: Add semantic patch for sizeof() using palloc()
If palloc() is used to allocate elements of type T, the result should be
assigned to a variable of type T*, or you risk out-of-bounds accesses. This
semantic patch checks that allocations assigned to variables of type T* use
sizeof(T) when allocating memory with palloc().
---
cocci/palloc_sizeof.cocci | 49 ++++++++++++++++++++++++++++
contrib/btree_gist/btree_utils_var.c | 2 +-
src/backend/jit/llvm/llvmjit_expr.c | 5 ++-
3 files changed, 52 insertions(+), 4 deletions(-)
create mode 100644 cocci/palloc_sizeof.cocci
diff --git a/cocci/palloc_sizeof.cocci b/cocci/palloc_sizeof.cocci
new file mode 100644
index 00000000000..5f8593c2687
--- /dev/null
+++ b/cocci/palloc_sizeof.cocci
@@ -0,0 +1,49 @@
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+import re
+
+CONST_CRE = re.compile(r'\bconst\b')
+
+def is_simple_type(s):
+ return s != 'void' and not CONST_CRE.search(s)
+
+@r1 depends on report || context@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+position p;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+* I = func@p(sizeof(T2))
+|
+* I = func@p(E * sizeof(T2))
+)
+
+@script:python depends on report@
+T1 << r1.T1;
+T2 << r1.T2;
+I << r1.I;
+p << r1.p;
+@@
+coccilib.report.print_report(p[0], f"'{I}' has type '{T1}*' but 'sizeof({T2})' is used to allocate memory")
+
+@depends on patch@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+- I = func(sizeof(T2))
++ I = func(sizeof(T1))
+|
+- I = func(E * sizeof(T2))
++ I = func(E * sizeof(T1))
+)
diff --git a/contrib/btree_gist/btree_utils_var.c b/contrib/btree_gist/btree_utils_var.c
index d9df2356cd1..36937795e90 100644
--- a/contrib/btree_gist/btree_utils_var.c
+++ b/contrib/btree_gist/btree_utils_var.c
@@ -475,7 +475,7 @@ gbt_var_picksplit(const GistEntryVector *entryvec, GIST_SPLITVEC *v,
v->spl_nleft = 0;
v->spl_nright = 0;
- sv = palloc(sizeof(bytea *) * (maxoff + 1));
+ sv = palloc((maxoff + 1) * sizeof(GBT_VARKEY *));
/* Sort entries */
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index c1cf34f1034..3ef01aadd47 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -690,8 +690,7 @@ llvm_compile_expr(ExprState *state)
LLVMBuildStore(b, l_sbool_const(1), v_resnullp);
/* create blocks for checking args, one for each */
- b_checkargnulls =
- palloc(sizeof(LLVMBasicBlockRef *) * op->d.func.nargs);
+ b_checkargnulls = palloc(op->d.func.nargs * sizeof(LLVMBasicBlockRef));
for (int argno = 0; argno < op->d.func.nargs; argno++)
b_checkargnulls[argno] =
l_bb_before_v(b_nonull, "b.%d.isnull.%d", opno,
@@ -2520,7 +2519,7 @@ llvm_compile_expr(ExprState *state)
v_nullsp = l_ptr_const(nulls, l_ptr(TypeStorageBool));
/* create blocks for checking args */
- b_checknulls = palloc(sizeof(LLVMBasicBlockRef *) * nargs);
+ b_checknulls = palloc(nargs * sizeof(LLVMBasicBlockRef));
for (int argno = 0; argno < nargs; argno++)
{
b_checknulls[argno] =
--
2.43.0
0001-Add-initial-coccicheck-script.v2.patch (text/x-patch)
From 3d77e12fabddf7011f50550fe3ef35825e0465c1 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 19:35:58 +0100
Subject: Add initial coccicheck script
The coccicheck.py script can be used to run several semantic patches on a
source tree to either generate a report, see the context of the modification
(which lines require changes), or generate a patch to correct an issue.
python coccicheck.py <options> <pattern> <path> ...
Options:
--spatch=SPATCH
Path to spatch binary. Defaults to value of environment variable
SPATCH.
--mode={report,context,patch}
Defaults to value of environment variable MODE.
<pattern>
pattern for all semantic patches to match. For example,
src/tools/cocci/**/*.cocci to match all *.cocci files in the directory
src/tools/cocci.
<path>
Path to source files to apply semantic patches to.
---
src/tools/coccicheck.py | 176 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 176 insertions(+)
create mode 100755 src/tools/coccicheck.py
diff --git a/src/tools/coccicheck.py b/src/tools/coccicheck.py
new file mode 100755
index 00000000000..f5a7ccaa92f
--- /dev/null
+++ b/src/tools/coccicheck.py
@@ -0,0 +1,176 @@
+#!/usr/bin/env python3
+
+"""Run Coccinelle on a set of files and directories.
+
+This is a re-written version of the Linux ``coccicheck`` script.
+
+Coccicheck can run in three different modes (the original has four
+different modes):
+
+- *patch*: patch files using the cocci file.
+
+- *report*: report any improvements that the semantic patch can
+  make, but do not show any patch.
+
+- *context*: show the context where the patch can be applied.
+
+The program will take a single cocci file and call spatch(1) with a
+set of paths that can be either files or directories.
+
+When starting, the cocci file will be parsed and any lines containing
+"Options:" or "Requires:" will be treated specially.
+
+- Lines containing "Options:" will have a list of options to add to
+ the call of the spatch(1) program. These options will be added last.
+
+- Lines containing "Requires:" can contain a version of spatch(1) that
+ is required for this cocci file. If the version requirements are not
+ satisfied, the file will not be used.
+
+When calling spatch(1), it will set one of the virtual rules "patch",
+"report", or "context", and the cocci file can use these to act
+differently depending on the mode.
+
+The following environment variables are available:
+
+SPATCH: Path to spatch program. This will be used if no path is
+ passed using the option --spatch.
+
+SPFLAGS: Extra flags to use when calling spatch. These will be added
+ last.
+
+MODE: Mode to use. It will be used if no --mode is passed to
+ coccicheck.py.
+
+"""
+
+import argparse
+import os
+import sys
+import subprocess
+import re
+
+from pathlib import PurePath, Path
+from packaging import version
+
+VERSION_CRE = re.compile(
+ r'spatch version (\S+) compiled with OCaml version (\S+)'
+)
+
+
+def parse_metadata(cocci_file):
+ """Parse metadata in Cocci file."""
+ metadata = {}
+ with open(cocci_file) as fh:
+ for line in fh:
+            mre = re.search(r'(Options|Requires):(.*)', line, re.IGNORECASE)
+ if mre:
+ metadata[mre.group(1).lower()] = mre.group(2)
+ return metadata
+
+
+def get_config(args):
+ """Compute configuration information."""
+ # Figure out spatch version. We just need to read the first line
+ config = {}
+ cmd = [args.spatch, '--version']
+ with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
+ for line in proc.stdout:
+ mre = VERSION_CRE.match(line)
+ if mre:
+ config['spatch_version'] = mre.group(1)
+ break
+ return config
+
+
+def run_spatch(cocci_file, args, config, env):
+ """Run coccinelle on the provided file."""
+ if args.verbose > 1:
+ print("processing cocci file", cocci_file)
+ spatch_version = config['spatch_version']
+ metadata = parse_metadata(cocci_file)
+
+    # Skip the file if the installed spatch is older than the required version
+    if 'requires' in metadata:
+        required_version = version.parse(metadata['requires'].strip())
+        if version.parse(spatch_version) < required_version:
+            print(
+                f'Skipping SmPL patch {cocci_file}: '
+                f'requires {required_version} (had {spatch_version})'
+            )
+            return
+
+ command = [
+ args.spatch,
+ "-D", args.mode,
+ "--cocci-file", cocci_file,
+ "--very-quiet",
+ ]
+
+    if 'options' in metadata:
+        command.extend(metadata['options'].split())
+    if args.mode == 'report':
+        command.append('--no-show-diff')
+    if args.spflags:
+        command.extend(str(args.spflags).split())
+
+ sp = subprocess.run(command + args.path, env=env)
+ if sp.returncode != 0:
+ sys.exit(sp.returncode)
+
+
+def coccinelle(args, config, env):
+ """Run coccinelle on all files matching the provided pattern."""
+    # Path.glob() only accepts relative patterns, so split off any anchor.
+    cocci = PurePath(args.cocci)
+    root = cocci.anchor if cocci.is_absolute() else '.'
+    pattern = str(cocci.relative_to(root)) if cocci.is_absolute() else args.cocci
+    count = 0
+    for cocci_file in Path(root).glob(pattern):
+ count += 1
+ run_spatch(cocci_file, args, config, env)
+ return count
+
+
+def main(argv):
+ """Run coccicheck."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--verbose', '-v', action='count', default=0)
+ parser.add_argument('--spatch', type=PurePath, metavar='SPATCH',
+ default=os.environ.get('SPATCH'),
+ help=('Path to spatch binary. Defaults to '
+ 'value of environment variable SPATCH.'))
+    parser.add_argument('--spflags',
+                        metavar='SPFLAGS',
+                        default=os.environ.get('SPFLAGS', None),
+                        help=('Flags to pass to spatch call. Defaults '
+                              'to value of environment variable SPFLAGS.'))
+ parser.add_argument('--mode', choices=['patch', 'report', 'context'],
+ default=os.environ.get('MODE', 'report'),
+ help=('Mode to use for coccinelle. Defaults to '
+ 'value of environment variable MODE.'))
+ parser.add_argument('--include', '-I', type=PurePath,
+ metavar='DIR',
+ help='Extra include directories.')
+ parser.add_argument('cocci', metavar='pattern',
+ help='Pattern for Cocci files to use.')
+ parser.add_argument('path', nargs='+', type=PurePath,
+ help='Directory or source path to process.')
+
+ args = parser.parse_args(argv)
+
+ if args.verbose > 1:
+ print("arguments:", args)
+
+ if args.spatch is None:
+ parser.error('spatch is part of the Coccinelle project and is '
+ 'available at http://coccinelle.lip6.fr/')
+
+ if coccinelle(args, get_config(args), os.environ) == 0:
+ parser.error(f'no coccinelle files found matching {args.cocci}')
+
+
+if __name__ == '__main__':
+ try:
+ main(sys.argv[1:])
+ except KeyboardInterrupt:
+ print("Execution aborted")
+ except Exception as exc:
+ print(exc)
--
2.43.0
0002-Create-coccicheck-target-for-autoconf.v2.patch (text/x-patch)
From 8ddf7134c1c52bfc5c45a602e741da08f2d57c01 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Mon, 30 Dec 2024 19:58:07 +0100
Subject: Create coccicheck target for autoconf
This adds a coccicheck target for the autoconf-based build system. The
coccicheck target accepts one parameter MODE, which can be either "patch",
"report", or "context". The "patch" mode will generate a patch that can be
applied to the source tree, the "report" mode will generate a list of file
locations with information about what can be changed, and the "context" mode
will just highlight the line that will be affected by the semantic patch.
The following will generate a patch and apply it to the source code tree:
make coccicheck MODE=patch | patch -p1
---
configure | 100 ++++++++++++++++++++++++++++++++++++++---
configure.ac | 12 +++++
src/Makefile.global.in | 24 +++++++++-
src/makefiles/pgxs.mk | 3 ++
4 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/configure b/configure
index ceeef9b0915..e4ab847a4aa 100755
--- a/configure
+++ b/configure
@@ -769,6 +769,9 @@ enable_coverage
GENHTML
LCOV
GCOV
+enable_coccicheck
+SPFLAGS
+SPATCH
enable_debug
enable_rpath
default_port
@@ -836,6 +839,7 @@ with_pgport
enable_rpath
enable_debug
enable_profiling
+enable_coccicheck
enable_coverage
enable_dtrace
enable_tap_tests
@@ -1528,6 +1532,7 @@ Optional Features:
executables
--enable-debug build with debugging symbols (-g)
--enable-profiling build with profiling enabled
+ --enable-coccicheck enable Coccinelle checks (requires spatch)
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
--enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
@@ -3319,6 +3324,91 @@ fi
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+
+
+# Check whether --enable-coccicheck was given.
+if test "${enable_coccicheck+set}" = set; then :
+ enableval=$enable_coccicheck;
+ case $enableval in
+ yes)
+ if test -z "$SPATCH"; then
+ for ac_prog in spatch
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPATCH+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $SPATCH in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_SPATCH="$SPATCH" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_SPATCH="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+SPATCH=$ac_cv_path_SPATCH
+if test -n "$SPATCH"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$SPATCH" && break
+done
+
+else
+ # Report the value of SPATCH in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SPATCH" >&5
+$as_echo_n "checking for SPATCH... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+fi
+
+if test -z "$SPATCH"; then
+ as_fn_error $? "spatch not found" "$LINENO" 5
+fi
+
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-coccicheck option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_coccicheck=no
+
+fi
+
+
+
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
@@ -14785,7 +14875,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -14831,7 +14921,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -14855,7 +14945,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -14900,7 +14990,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -14924,7 +15014,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
diff --git a/configure.ac b/configure.ac
index d713360f340..1f96e672632 100644
--- a/configure.ac
+++ b/configure.ac
@@ -199,6 +199,18 @@ AC_SUBST(enable_debug)
PGAC_ARG_BOOL(enable, profiling, no,
[build with profiling enabled ])
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+PGAC_ARG_BOOL(enable, coccicheck, no,
+ [enable Coccinelle checks (requires spatch)],
+[PGAC_PATH_PROGS(SPATCH, spatch)
+if test -z "$SPATCH"; then
+ AC_MSG_ERROR([spatch not found])
+fi
+AC_SUBST(SPFLAGS)])
+AC_SUBST(enable_coccicheck)
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 1278b7744f4..fe78276af13 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -19,7 +19,7 @@
#
# Meta configuration
-standard_targets = all install installdirs uninstall clean distclean coverage check checkprep installcheck init-po update-po
+standard_targets = all install installdirs uninstall clean distclean coccicheck coverage check checkprep installcheck init-po update-po
# these targets should recurse even into subdirectories not being built:
standard_always_targets = clean distclean
@@ -200,6 +200,7 @@ enable_rpath = @enable_rpath@
enable_nls = @enable_nls@
enable_debug = @enable_debug@
enable_dtrace = @enable_dtrace@
+enable_coccicheck = @enable_coccicheck@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
@@ -373,7 +374,7 @@ CLDR_VERSION = 45
# If a particular subdirectory knows this isn't needed in itself or its
# children, it can set NO_GENERATED_HEADERS.
-all install check installcheck: submake-generated-headers
+all install check installcheck coccicheck: submake-generated-headers
.PHONY: submake-generated-headers
@@ -520,6 +521,11 @@ FOP = @FOP@
XMLLINT = @XMLLINT@
XSLTPROC = @XSLTPROC@
+# Coccinelle
+
+SPATCH = @SPATCH@
+SPFLAGS = @SPFLAGS@
+
# Code coverage
GCOV = @GCOV@
@@ -990,6 +996,20 @@ endif # nls.mk
endif # enable_nls
+##########################################################################
+#
+# Coccinelle checks
+#
+
+ifeq ($(enable_coccicheck), yes)
+coccicheck_py = $(top_srcdir)/src/tools/coccicheck.py
+coccicheck = SPATCH=$(SPATCH) SPFLAGS=$(SPFLAGS) $(PYTHON) $(coccicheck_py)
+
+.PHONY: coccicheck
+coccicheck:
+ $(coccicheck) --mode=$(MODE) 'cocci/**/*.cocci' $(top_srcdir)
+endif # enable_coccicheck
+
##########################################################################
#
# Coverage
diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk
index 0de3737e789..144459dccd2 100644
--- a/src/makefiles/pgxs.mk
+++ b/src/makefiles/pgxs.mk
@@ -95,6 +95,9 @@ endif
ifeq ($(FLEX),)
FLEX = flex
endif
+ifeq ($(SPATCH),)
+SPATCH = spatch
+endif
endif # PGXS
--
2.43.0
0003-Add-meson-build-for-coccicheck.v2.patch (text/x-patch)
From e51f656cdd691cd00312b199857d193b51c6106c Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Wed, 1 Jan 2025 14:15:51 +0100
Subject: Add meson build for coccicheck
This commit adds a run target `coccicheck` to meson build files.
Since ninja does not accept parameters the same way make does, there are three
run targets defined---"coccicheck-patch", "coccicheck-report", and
"coccicheck-context"---that you can use to generate a patch, get a report, and
get the context respectively. For example, to patch the tree from the "build"
subdirectory created by the meson run:
ninja coccicheck-patch | patch -d .. -p1
---
meson.build | 26 ++++++++++++++++++++++++++
meson_options.txt | 7 ++++++-
src/makefiles/meson.build | 6 ++++++
3 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/meson.build b/meson.build
index 32fc89f3a4b..e8a975657e9 100644
--- a/meson.build
+++ b/meson.build
@@ -348,6 +348,7 @@ missing = find_program('config/missing', native: true)
cp = find_program('cp', required: false, native: true)
xmllint_bin = find_program(get_option('XMLLINT'), native: true, required: false)
xsltproc_bin = find_program(get_option('XSLTPROC'), native: true, required: false)
+spatch = find_program(get_option('SPATCH'), native: true, required: false)
bison_flags = []
if bison.found()
@@ -1546,6 +1547,30 @@ else
endif
+###############################################################
+# Option: Coccinelle checks
+###############################################################
+
+coccicheck_opt = get_option('coccicheck')
+coccicheck_dep = not_found_dep
+if not coccicheck_opt.disabled()
+ if spatch.found()
+ coccicheck_dep = declare_dependency()
+ elif coccicheck_opt.enabled()
+ error('missing required tools (spatch needed) for Coccinelle checks')
+ endif
+endif
+
+coccicheck_modes = ['context', 'report', 'patch']
+
+foreach mode : coccicheck_modes
+ run_target('coccicheck-' + mode,
+ command: [python, files('src/tools/coccicheck.py'),
+ '--mode', mode,
+ '--spatch', spatch,
+ '@SOURCE_ROOT@/cocci/**/*.cocci',
+ '@SOURCE_ROOT@'])
+endforeach
###############################################################
# Compiler tests
@@ -3688,6 +3713,7 @@ if meson.version().version_compare('>=0.57')
{
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
+ 'spatch': spatch,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
},
section: 'Programs',
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..f1c6e219b2b 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -43,6 +43,9 @@ option('cassert', type: 'boolean', value: false,
option('tap_tests', type: 'feature', value: 'auto',
description: 'Enable TAP tests')
+option('coccicheck', type: 'feature', value: 'auto',
+ description: 'Enable Coccinelle checks')
+
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -52,7 +55,6 @@ option('PG_TEST_EXTRA', type: 'string', value: '',
option('PG_GIT_REVISION', type: 'string', value: 'HEAD',
description: 'git revision to be packaged by pgdist target')
-
# Compilation options
option('extra_include_dirs', type: 'array', value: [],
@@ -192,6 +194,9 @@ option('PYTHON', type: 'array', value: ['python3', 'python'],
option('SED', type: 'string', value: 'gsed',
description: 'Path to sed binary')
+option('SPATCH', type: 'string', value: 'spatch',
+ description: 'Path to spatch binary, used for SmPL patches')
+
option('STRIP', type: 'string', value: 'strip',
description: 'Path to strip binary, used for PGXS emulation')
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..4dcf386daf0 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -57,6 +57,7 @@ pgxs_kv = {
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
+ 'enable_coccicheck': spatch.found() ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -151,6 +152,7 @@ pgxs_bins = {
'TAR': tar,
'ZSTD': program_zstd,
'DTRACE': dtrace,
+ 'SPATCH': spatch,
}
pgxs_empty = [
@@ -166,6 +168,10 @@ pgxs_empty = [
'DBTOEPUB',
'FOP',
+ # Coccinelle is not supported by pgxs
+ 'SPATCH',
+ 'SPFLAGS',
+
# supporting coverage for pgxs-in-meson build doesn't seem worth it
'GENHTML',
'LCOV',
--
2.43.0
0005-Add-script-for-palloc_array.v2.patch (text/x-patch)
From 72ef88c31015238c3816a1354c53d12b70b005cf Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 20:23:25 +0100
Subject: Add script for palloc_array
Macros were added to the palloc API in commit 2016055a92f to improve
type-safety, but very few instances were replaced. This adds a cocci script to
do that replacement and applies it to the code base. It deliberately does not
replace instances where the type of the variable and the type used in the
macro do not match.
---
cocci/palloc_array.cocci | 159 ++++++++++++++++++
contrib/amcheck/verify_heapam.c | 2 +-
contrib/amcheck/verify_nbtree.c | 2 +-
.../basebackup_to_shell/basebackup_to_shell.c | 2 +-
contrib/bloom/blinsert.c | 2 +-
contrib/bloom/blutils.c | 2 +-
contrib/bloom/blvacuum.c | 4 +-
contrib/btree_gin/btree_gin.c | 16 +-
contrib/btree_gist/btree_inet.c | 4 +-
contrib/btree_gist/btree_interval.c | 6 +-
contrib/btree_gist/btree_time.c | 4 +-
contrib/btree_gist/btree_ts.c | 8 +-
contrib/btree_gist/btree_utils_num.c | 6 +-
contrib/btree_gist/btree_utils_var.c | 10 +-
contrib/btree_gist/btree_uuid.c | 2 +-
contrib/cube/cube.c | 2 +-
contrib/dict_int/dict_int.c | 4 +-
contrib/dict_xsyn/dict_xsyn.c | 6 +-
contrib/file_fdw/file_fdw.c | 8 +-
contrib/hstore/hstore_gin.c | 4 +-
contrib/hstore/hstore_gist.c | 6 +-
contrib/hstore/hstore_io.c | 16 +-
contrib/hstore/hstore_op.c | 14 +-
contrib/hstore_plperl/hstore_plperl.c | 2 +-
contrib/hstore_plpython/hstore_plpython.c | 2 +-
contrib/intarray/_int_bool.c | 2 +-
contrib/intarray/_int_gin.c | 4 +-
contrib/intarray/_int_gist.c | 12 +-
contrib/intarray/_intbig_gist.c | 6 +-
contrib/jsonb_plpython/jsonb_plpython.c | 2 +-
contrib/ltree/_ltree_gist.c | 6 +-
contrib/ltree/_ltree_op.c | 2 +-
contrib/ltree/ltree_gist.c | 6 +-
contrib/ltree/ltree_io.c | 8 +-
contrib/ltree/ltree_op.c | 2 +-
contrib/ltree/ltxtquery_io.c | 2 +-
contrib/pageinspect/brinfuncs.c | 3 +-
contrib/pageinspect/btreefuncs.c | 8 +-
contrib/pageinspect/ginfuncs.c | 4 +-
contrib/pageinspect/hashfuncs.c | 2 +-
contrib/pageinspect/heapfuncs.c | 4 +-
contrib/pg_buffercache/pg_buffercache_pages.c | 2 +-
contrib/pg_logicalinspect/pg_logicalinspect.c | 6 +-
contrib/pg_prewarm/autoprewarm.c | 3 +-
.../pg_stat_statements/pg_stat_statements.c | 2 +-
contrib/pg_trgm/trgm_gin.c | 6 +-
contrib/pg_trgm/trgm_gist.c | 12 +-
contrib/pg_trgm/trgm_op.c | 14 +-
contrib/pg_trgm/trgm_regexp.c | 21 ++-
contrib/pg_visibility/pg_visibility.c | 2 +-
contrib/pg_walinspect/pg_walinspect.c | 5 +-
contrib/pgcrypto/mbuf.c | 8 +-
contrib/pgcrypto/openssl.c | 4 +-
contrib/pgcrypto/pgp-cfb.c | 2 +-
contrib/pgcrypto/pgp-compress.c | 4 +-
contrib/pgcrypto/pgp-decrypt.c | 4 +-
contrib/pgcrypto/pgp-encrypt.c | 4 +-
contrib/pgcrypto/pgp-pgsql.c | 6 +-
contrib/pgcrypto/pgp-pubkey.c | 2 +-
contrib/pgcrypto/pgp.c | 2 +-
contrib/pgcrypto/px-hmac.c | 2 +-
contrib/pgcrypto/px.c | 2 +-
contrib/pgrowlocks/pgrowlocks.c | 2 +-
contrib/postgres_fdw/postgres_fdw.c | 20 +--
contrib/seg/seg.c | 13 +-
contrib/sepgsql/label.c | 2 +-
contrib/sepgsql/uavc.c | 2 +-
contrib/spi/autoinc.c | 6 +-
contrib/spi/refint.c | 8 +-
contrib/sslinfo/sslinfo.c | 2 +-
contrib/tablefunc/tablefunc.c | 14 +-
contrib/test_decoding/test_decoding.c | 2 +-
contrib/unaccent/unaccent.c | 9 +-
contrib/xml2/xpath.c | 4 +-
src/backend/access/brin/brin.c | 4 +-
src/backend/access/brin/brin_minmax_multi.c | 8 +-
src/backend/access/brin/brin_revmap.c | 2 +-
src/backend/access/brin/brin_tuple.c | 10 +-
src/backend/access/common/attmap.c | 2 +-
src/backend/access/common/heaptuple.c | 8 +-
src/backend/access/common/printtup.c | 2 +-
src/backend/access/common/reloptions.c | 16 +-
src/backend/access/common/tidstore.c | 8 +-
src/backend/access/common/tupconvert.c | 4 +-
src/backend/access/common/tupdesc.c | 2 +-
src/backend/access/gin/ginbtree.c | 6 +-
src/backend/access/gin/gindatapage.c | 14 +-
src/backend/access/gin/ginentrypage.c | 2 +-
src/backend/access/gin/ginget.c | 2 +-
src/backend/access/gin/gininsert.c | 4 +-
src/backend/access/gin/ginscan.c | 2 +-
src/backend/access/gin/ginutil.c | 8 +-
src/backend/access/gin/ginvacuum.c | 6 +-
src/backend/access/gist/gist.c | 22 +--
src/backend/access/gist/gistbuild.c | 10 +-
src/backend/access/gist/gistbuildbuffers.c | 7 +-
src/backend/access/gist/gistproc.c | 32 ++--
src/backend/access/gist/gistscan.c | 5 +-
src/backend/access/gist/gistsplit.c | 10 +-
src/backend/access/gist/gistutil.c | 4 +-
src/backend/access/gist/gistvacuum.c | 4 +-
src/backend/access/gist/gistxlog.c | 2 +-
src/backend/access/hash/hash.c | 4 +-
src/backend/access/hash/hashsort.c | 2 +-
src/backend/access/heap/heapam.c | 8 +-
src/backend/access/heap/heapam_handler.c | 6 +-
src/backend/access/heap/vacuumlazy.c | 6 +-
src/backend/access/index/amvalidate.c | 2 +-
src/backend/access/nbtree/nbtinsert.c | 12 +-
src/backend/access/nbtree/nbtree.c | 4 +-
src/backend/access/nbtree/nbtsort.c | 18 +-
src/backend/access/spgist/spgdoinsert.c | 28 +--
src/backend/access/spgist/spginsert.c | 2 +-
src/backend/access/spgist/spgkdtreeproc.c | 2 +-
src/backend/access/spgist/spgproc.c | 4 +-
src/backend/access/spgist/spgquadtreeproc.c | 8 +-
src/backend/access/spgist/spgscan.c | 6 +-
src/backend/access/spgist/spgtextproc.c | 2 +-
src/backend/access/spgist/spgutils.c | 2 +-
src/backend/access/spgist/spgvacuum.c | 6 +-
src/backend/access/spgist/spgxlog.c | 2 +-
src/backend/access/transam/multixact.c | 7 +-
src/backend/access/transam/parallel.c | 2 +-
src/backend/access/transam/timeline.c | 8 +-
src/backend/access/transam/twophase.c | 9 +-
src/backend/access/transam/xact.c | 4 +-
src/backend/access/transam/xlog.c | 4 +-
src/backend/access/transam/xlogfuncs.c | 2 +-
src/backend/access/transam/xloginsert.c | 7 +-
src/backend/access/transam/xlogprefetcher.c | 2 +-
src/backend/access/transam/xlogrecovery.c | 6 +-
src/backend/access/transam/xlogutils.c | 2 +-
src/backend/backup/basebackup.c | 7 +-
src/backend/backup/basebackup_copy.c | 2 +-
src/backend/backup/basebackup_gzip.c | 2 +-
src/backend/backup/basebackup_incremental.c | 8 +-
src/backend/backup/basebackup_lz4.c | 2 +-
src/backend/backup/basebackup_progress.c | 2 +-
src/backend/backup/basebackup_server.c | 2 +-
src/backend/backup/basebackup_target.c | 4 +-
src/backend/backup/basebackup_throttle.c | 2 +-
src/backend/backup/basebackup_zstd.c | 2 +-
src/backend/backup/walsummary.c | 2 +-
src/backend/bootstrap/bootstrap.c | 4 +-
src/backend/catalog/aclchk.c | 2 +-
src/backend/catalog/dependency.c | 12 +-
src/backend/catalog/heap.c | 10 +-
src/backend/catalog/index.c | 2 +-
src/backend/catalog/namespace.c | 4 +-
src/backend/catalog/objectaddress.c | 4 +-
src/backend/catalog/pg_constraint.c | 8 +-
src/backend/catalog/pg_depend.c | 2 +-
src/backend/catalog/pg_enum.c | 6 +-
src/backend/catalog/pg_inherits.c | 4 +-
src/backend/catalog/pg_publication.c | 6 +-
src/backend/catalog/pg_shdepend.c | 13 +-
src/backend/catalog/pg_subscription.c | 4 +-
src/backend/catalog/storage.c | 14 +-
src/backend/commands/alter.c | 18 +-
src/backend/commands/analyze.c | 54 +++---
src/backend/commands/async.c | 4 +-
src/backend/commands/cluster.c | 4 +-
src/backend/commands/collationcmds.c | 7 +-
src/backend/commands/copy.c | 2 +-
src/backend/commands/copyfrom.c | 10 +-
src/backend/commands/copyto.c | 2 +-
src/backend/commands/createas.c | 2 +-
src/backend/commands/dbcommands.c | 4 +-
src/backend/commands/event_trigger.c | 22 +--
src/backend/commands/explain.c | 6 +-
src/backend/commands/extension.c | 8 +-
src/backend/commands/functioncmds.c | 10 +-
src/backend/commands/matview.c | 4 +-
src/backend/commands/opclasscmds.c | 12 +-
src/backend/commands/policy.c | 8 +-
src/backend/commands/publicationcmds.c | 6 +-
src/backend/commands/seclabel.c | 2 +-
src/backend/commands/subscriptioncmds.c | 9 +-
src/backend/commands/tablecmds.c | 40 ++---
src/backend/commands/tablespace.c | 2 +-
src/backend/commands/trigger.c | 20 +--
src/backend/commands/tsearchcmds.c | 8 +-
src/backend/commands/typecmds.c | 2 +-
src/backend/commands/user.c | 4 +-
src/backend/commands/vacuumparallel.c | 4 +-
src/backend/executor/execExpr.c | 23 ++-
src/backend/executor/execExprInterp.c | 12 +-
src/backend/executor/execGrouping.c | 2 +-
src/backend/executor/execIndexing.c | 2 +-
src/backend/executor/execJunk.c | 4 +-
src/backend/executor/execMain.c | 4 +-
src/backend/executor/execParallel.c | 5 +-
src/backend/executor/execPartition.c | 4 +-
src/backend/executor/execReplication.c | 6 +-
src/backend/executor/execSRF.c | 4 +-
src/backend/executor/execTuples.c | 14 +-
src/backend/executor/functions.c | 6 +-
src/backend/executor/instrument.c | 2 +-
src/backend/executor/nodeAgg.c | 4 +-
src/backend/executor/nodeAppend.c | 5 +-
src/backend/executor/nodeBitmapAnd.c | 2 +-
src/backend/executor/nodeBitmapOr.c | 2 +-
src/backend/executor/nodeIndexscan.c | 32 ++--
src/backend/executor/nodeMemoize.c | 6 +-
src/backend/executor/nodeMergeAppend.c | 2 +-
src/backend/executor/nodeModifyTable.c | 6 +-
src/backend/executor/nodeSamplescan.c | 2 +-
src/backend/executor/nodeSubplan.c | 4 +-
src/backend/executor/nodeTidrangescan.c | 2 +-
src/backend/executor/nodeTidscan.c | 23 ++-
src/backend/executor/spi.c | 12 +-
src/backend/executor/tqueue.c | 4 +-
src/backend/executor/tstoreReceiver.c | 2 +-
src/backend/foreign/foreign.c | 10 +-
src/backend/jit/llvm/llvmjit.c | 2 +-
src/backend/jit/llvm/llvmjit_deform.c | 12 +-
src/backend/jit/llvm/llvmjit_expr.c | 12 +-
src/backend/lib/bipartite_match.c | 2 +-
src/backend/lib/dshash.c | 4 +-
src/backend/lib/integerset.c | 2 +-
src/backend/lib/knapsack.c | 4 +-
src/backend/lib/pairingheap.c | 2 +-
src/backend/lib/rbtree.c | 2 +-
src/backend/libpq/auth-scram.c | 2 +-
src/backend/libpq/hba.c | 12 +-
src/backend/libpq/pqcomm.c | 2 +-
src/backend/nodes/queryjumblefuncs.c | 2 +-
src/backend/nodes/readfuncs.c | 2 +-
src/backend/optimizer/geqo/geqo_erx.c | 2 +-
src/backend/optimizer/geqo/geqo_eval.c | 2 +-
src/backend/optimizer/geqo/geqo_pmx.c | 8 +-
src/backend/optimizer/geqo/geqo_pool.c | 4 +-
.../optimizer/geqo/geqo_recombination.c | 2 +-
src/backend/optimizer/path/allpaths.c | 2 +-
src/backend/optimizer/path/clausesel.c | 2 +-
src/backend/optimizer/path/costsize.c | 4 +-
src/backend/optimizer/path/equivclass.c | 4 +-
src/backend/optimizer/path/indxpath.c | 9 +-
src/backend/optimizer/path/pathkeys.c | 4 +-
src/backend/optimizer/plan/createplan.c | 92 +++++-----
src/backend/optimizer/plan/initsplan.c | 8 +-
src/backend/optimizer/plan/planagg.c | 2 +-
src/backend/optimizer/plan/planner.c | 25 +--
src/backend/optimizer/plan/setrefs.c | 6 +-
src/backend/optimizer/prep/prepjointree.c | 11 +-
src/backend/optimizer/prep/prepunion.c | 2 +-
src/backend/optimizer/util/appendinfo.c | 6 +-
src/backend/optimizer/util/clauses.c | 4 +-
src/backend/optimizer/util/plancat.c | 2 +-
src/backend/optimizer/util/predtest.c | 4 +-
src/backend/optimizer/util/tlist.c | 12 +-
src/backend/parser/analyze.c | 11 +-
src/backend/parser/parse_clause.c | 9 +-
src/backend/parser/parse_expr.c | 2 +-
src/backend/parser/parse_node.c | 2 +-
src/backend/parser/parse_param.c | 4 +-
src/backend/parser/parse_relation.c | 16 +-
src/backend/parser/parse_type.c | 2 +-
src/backend/partitioning/partbounds.c | 36 ++--
src/backend/partitioning/partdesc.c | 6 +-
src/backend/partitioning/partprune.c | 27 ++-
src/backend/postmaster/autovacuum.c | 8 +-
src/backend/postmaster/checkpointer.c | 4 +-
src/backend/postmaster/pgarch.c | 4 +-
src/backend/postmaster/pmchild.c | 2 +-
src/backend/postmaster/postmaster.c | 4 +-
src/backend/postmaster/syslogger.c | 2 +-
src/backend/postmaster/walsummarizer.c | 3 +-
.../libpqwalreceiver/libpqwalreceiver.c | 4 +-
.../replication/logical/applyparallelworker.c | 2 +-
src/backend/replication/logical/launcher.c | 2 +-
src/backend/replication/logical/logical.c | 2 +-
.../replication/logical/logicalfuncs.c | 2 +-
src/backend/replication/logical/proto.c | 6 +-
.../replication/logical/reorderbuffer.c | 14 +-
src/backend/replication/logical/slotsync.c | 2 +-
src/backend/replication/logical/snapbuild.c | 5 +-
src/backend/replication/logical/tablesync.c | 2 +-
src/backend/replication/logical/worker.c | 15 +-
src/backend/replication/pgoutput/pgoutput.c | 4 +-
src/backend/replication/syncrep.c | 6 +-
src/backend/replication/walreceiver.c | 4 +-
src/backend/replication/walsender.c | 2 +-
src/backend/rewrite/rewriteHandler.c | 6 +-
src/backend/rewrite/rewriteManip.c | 6 +-
src/backend/snowball/dict_snowball.c | 4 +-
src/backend/statistics/dependencies.c | 23 ++-
src/backend/statistics/extended_stats.c | 24 +--
src/backend/statistics/mcv.c | 16 +-
src/backend/statistics/mvdistinct.c | 6 +-
src/backend/storage/buffer/bufmgr.c | 6 +-
src/backend/storage/file/buffile.c | 6 +-
src/backend/storage/file/fd.c | 4 +-
src/backend/storage/ipc/procarray.c | 8 +-
src/backend/storage/ipc/shm_mq.c | 2 +-
src/backend/storage/lmgr/deadlock.c | 18 +-
src/backend/storage/lmgr/lock.c | 11 +-
src/backend/storage/lmgr/lwlock.c | 6 +-
src/backend/storage/lmgr/predicate.c | 2 +-
src/backend/storage/smgr/bulk_write.c | 2 +-
src/backend/storage/smgr/md.c | 2 +-
src/backend/storage/smgr/smgr.c | 2 +-
src/backend/storage/sync/sync.c | 2 +-
src/backend/tcop/fastpath.c | 2 +-
src/backend/tcop/pquery.c | 2 +-
src/backend/tsearch/dict.c | 2 +-
src/backend/tsearch/dict_ispell.c | 2 +-
src/backend/tsearch/dict_simple.c | 6 +-
src/backend/tsearch/dict_synonym.c | 4 +-
src/backend/tsearch/dict_thesaurus.c | 10 +-
src/backend/tsearch/spell.c | 8 +-
src/backend/tsearch/ts_parse.c | 2 +-
src/backend/tsearch/ts_selfuncs.c | 2 +-
src/backend/tsearch/ts_typanalyze.c | 6 +-
src/backend/tsearch/ts_utils.c | 5 +-
src/backend/tsearch/wparser.c | 8 +-
src/backend/tsearch/wparser_def.c | 13 +-
src/backend/utils/activity/pgstat_relation.c | 2 +-
src/backend/utils/activity/wait_event.c | 2 +-
src/backend/utils/adt/acl.c | 8 +-
src/backend/utils/adt/array_selfuncs.c | 8 +-
src/backend/utils/adt/array_typanalyze.c | 12 +-
src/backend/utils/adt/array_userfuncs.c | 12 +-
src/backend/utils/adt/arrayfuncs.c | 36 ++--
src/backend/utils/adt/arraysubs.c | 2 +-
src/backend/utils/adt/arrayutils.c | 2 +-
src/backend/utils/adt/date.c | 24 +--
src/backend/utils/adt/datetime.c | 10 +-
src/backend/utils/adt/enum.c | 4 +-
src/backend/utils/adt/formatting.c | 10 +-
src/backend/utils/adt/geo_ops.c | 94 +++++------
src/backend/utils/adt/geo_spgist.c | 26 +--
src/backend/utils/adt/int.c | 2 +-
src/backend/utils/adt/int8.c | 2 +-
src/backend/utils/adt/json.c | 6 +-
src/backend/utils/adt/jsonb.c | 10 +-
src/backend/utils/adt/jsonb_gin.c | 8 +-
src/backend/utils/adt/jsonb_util.c | 13 +-
src/backend/utils/adt/jsonfuncs.c | 60 +++----
src/backend/utils/adt/jsonpath_exec.c | 18 +-
src/backend/utils/adt/levenshtein.c | 4 +-
src/backend/utils/adt/lockfuncs.c | 8 +-
src/backend/utils/adt/mac.c | 14 +-
src/backend/utils/adt/mac8.c | 18 +-
src/backend/utils/adt/mcxtfuncs.c | 2 +-
src/backend/utils/adt/misc.c | 2 +-
src/backend/utils/adt/multirangetypes.c | 21 +--
.../utils/adt/multirangetypes_selfuncs.c | 4 +-
src/backend/utils/adt/name.c | 2 +-
src/backend/utils/adt/network.c | 24 +--
src/backend/utils/adt/network_gist.c | 10 +-
src/backend/utils/adt/numeric.c | 17 +-
src/backend/utils/adt/oracle_compat.c | 4 +-
src/backend/utils/adt/orderedsetaggs.c | 14 +-
src/backend/utils/adt/pg_locale_libc.c | 6 +-
src/backend/utils/adt/rangetypes_gist.c | 11 +-
src/backend/utils/adt/rangetypes_selfuncs.c | 4 +-
src/backend/utils/adt/rangetypes_spgist.c | 4 +-
src/backend/utils/adt/rangetypes_typanalyze.c | 12 +-
src/backend/utils/adt/regexp.c | 10 +-
src/backend/utils/adt/rowtypes.c | 56 +++---
src/backend/utils/adt/ruleutils.c | 10 +-
src/backend/utils/adt/selfuncs.c | 10 +-
src/backend/utils/adt/timestamp.c | 46 +++--
src/backend/utils/adt/tsginidx.c | 6 +-
src/backend/utils/adt/tsgistidx.c | 10 +-
src/backend/utils/adt/tsquery.c | 6 +-
src/backend/utils/adt/tsquery_cleanup.c | 2 +-
src/backend/utils/adt/tsquery_gist.c | 4 +-
src/backend/utils/adt/tsquery_op.c | 6 +-
src/backend/utils/adt/tsquery_rewrite.c | 2 +-
src/backend/utils/adt/tsquery_util.c | 6 +-
src/backend/utils/adt/tsrank.c | 9 +-
src/backend/utils/adt/tsvector.c | 5 +-
src/backend/utils/adt/tsvector_op.c | 11 +-
src/backend/utils/adt/tsvector_parser.c | 7 +-
src/backend/utils/adt/uuid.c | 4 +-
src/backend/utils/adt/varlena.c | 8 +-
src/backend/utils/adt/xml.c | 8 +-
src/backend/utils/cache/catcache.c | 4 +-
src/backend/utils/cache/evtcache.c | 2 +-
src/backend/utils/cache/inval.c | 5 +-
src/backend/utils/cache/lsyscache.c | 6 +-
src/backend/utils/cache/plancache.c | 10 +-
src/backend/utils/cache/relcache.c | 37 ++--
src/backend/utils/cache/typcache.c | 17 +-
src/backend/utils/error/elog.c | 2 +-
src/backend/utils/fmgr/fmgr.c | 4 +-
src/backend/utils/fmgr/funcapi.c | 14 +-
src/backend/utils/init/postinit.c | 2 +-
src/backend/utils/mb/mbutils.c | 4 +-
src/backend/utils/misc/conffiles.c | 6 +-
src/backend/utils/misc/guc.c | 7 +-
src/backend/utils/misc/tzparser.c | 2 +-
src/backend/utils/mmgr/dsa.c | 4 +-
src/backend/utils/sort/logtape.c | 4 +-
src/backend/utils/sort/sharedtuplestore.c | 4 +-
src/backend/utils/sort/tuplesort.c | 2 +-
src/backend/utils/sort/tuplesortvariants.c | 10 +-
src/backend/utils/sort/tuplestore.c | 2 +-
src/backend/utils/time/combocid.c | 4 +-
src/backend/utils/time/snapmgr.c | 2 +-
src/bin/pg_basebackup/astreamer_inject.c | 2 +-
src/bin/pg_combinebackup/load_manifest.c | 2 +-
src/bin/pg_dump/common.c | 2 +-
src/bin/pg_verifybackup/astreamer_verify.c | 2 +-
src/bin/pg_verifybackup/pg_verifybackup.c | 2 +-
src/common/blkreftable.c | 14 +-
src/common/parse_manifest.c | 4 +-
src/common/pgfnames.c | 6 +-
src/common/rmtree.c | 2 +-
src/fe_utils/astreamer_file.c | 4 +-
src/fe_utils/astreamer_gzip.c | 4 +-
src/fe_utils/astreamer_lz4.c | 4 +-
src/fe_utils/astreamer_tar.c | 6 +-
src/fe_utils/astreamer_zstd.c | 4 +-
src/pl/plperl/plperl.c | 20 +--
src/pl/plpgsql/src/pl_comp.c | 28 +--
src/pl/plpgsql/src/pl_exec.c | 4 +-
src/pl/plpython/plpy_cursorobject.c | 4 +-
src/pl/plpython/plpy_exec.c | 6 +-
src/pl/plpython/plpy_procedure.c | 2 +-
src/pl/plpython/plpy_spi.c | 4 +-
src/pl/plpython/plpy_typeio.c | 12 +-
src/pl/tcl/pltcl.c | 8 +-
.../modules/dummy_index_am/dummy_index_am.c | 2 +-
src/test/modules/plsample/plsample.c | 2 +-
.../modules/test_integerset/test_integerset.c | 4 +-
src/test/modules/test_parser/test_parser.c | 4 +-
.../modules/test_radixtree/test_radixtree.c | 2 +-
src/test/modules/test_rbtree/test_rbtree.c | 6 +-
src/test/modules/test_regex/test_regex.c | 8 +-
.../test_resowner/test_resowner_basic.c | 4 +-
.../test_resowner/test_resowner_many.c | 6 +-
.../modules/test_rls_hooks/test_rls_hooks.c | 4 +-
src/test/modules/worker_spi/worker_spi.c | 2 +-
src/test/regress/regress.c | 16 +-
src/timezone/pgtz.c | 2 +-
src/tutorial/complex.c | 6 +-
src/tutorial/funcs.c | 2 +-
440 files changed, 1794 insertions(+), 1631 deletions(-)
create mode 100644 cocci/palloc_array.cocci
diff --git a/cocci/palloc_array.cocci b/cocci/palloc_array.cocci
new file mode 100644
index 00000000000..7395c33b71b
--- /dev/null
+++ b/cocci/palloc_array.cocci
@@ -0,0 +1,159 @@
+// Since PG16 there are array versions of common palloc operations, so
+// we can use those instead.
+//
+// We ignore cases where we have an anonymous struct and also when the
+// type of the variable being assigned to is different from the
+// inferred type.
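+//
+// As an illustrative sketch (MyStruct and n are hypothetical names;
+// the real transformations applied below follow the same shape),
+// running this file in patch mode rewrites
+//
+//     MyStruct *p = (MyStruct *) palloc0(sizeof(MyStruct));
+//     Datum *d = (Datum *) palloc(n * sizeof(Datum));
+//
+// into
+//
+//     MyStruct *p = palloc0_object(MyStruct);
+//     Datum *d = palloc_array(Datum, n);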
+
+virtual patch
+virtual report
+virtual context
+
+// These rules (soN) are needed to rewrite expressions of the form
+// sizeof(T[C]) into C * sizeof(T), since Coccinelle cannot (currently)
+// handle such expressions directly.
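+// For example, in patch mode the soN rules first rewrite
+// palloc(sizeof(Datum[16])) into palloc(16 * sizeof(Datum)), which the
+// rules further down can then turn into palloc_array(Datum, 16).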
+@initialize:python@
+@@
+import re
+
+CRE = re.compile(r'(.*)\s+\[\s+(\d+)\s+\]$')
+
+def is_array_type(s):
+ mre = CRE.match(s)
+ return (mre is not None)
+
+@so1 depends on patch@
+type T : script:python() { is_array_type(T) };
+@@
+palloc(sizeof(T))
+
+@script:python so2 depends on patch@
+T << so1.T;
+T2;
+E;
+@@
+mre = CRE.match(T)
+coccinelle.T2 = cocci.make_type(mre.group(1))
+coccinelle.E = cocci.make_expr(mre.group(2))
+
+@depends on patch@
+type so1.T;
+type so2.T2;
+expression so2.E;
+@@
+- palloc(sizeof(T))
++ palloc(E * sizeof(T2))
+
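+// The report and context rules below only flag matching call sites;
+// the corresponding patch rules perform the actual rewrites.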
+@r1 depends on report || context@
+type T !~ "^struct {";
+expression E;
+position p;
+idexpression T *I;
+identifier alloc = {palloc0, palloc};
+fresh identifier alloc_array = alloc ## "_array";
+@@
+* I = alloc@p(E * sizeof(T))
+
+@script:python depends on report@
+p << r1.p;
+alloc << r1.alloc;
+alloc_array << r1.alloc_array;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc_array}")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+T *P;
+idexpression T* I;
+constant C;
+identifier alloc = {palloc0, palloc};
+fresh identifier alloc_array = alloc ## "_array";
+@@
+(
+- I = (T*) alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = (T*) alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+)
+
+@r3 depends on report || context@
+type T !~ "^struct {";
+expression E;
+idexpression T *P;
+idexpression T *I;
+position p;
+@@
+* I = repalloc@p(P, E * sizeof(T))
+
+@script:python depends on report@
+p << r3.p;
+@@
+coccilib.report.print_report(p[0], "this repalloc can be replaced with repalloc_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+idexpression T *P1;
+idexpression T *P2;
+idexpression T *I;
+constant C;
+@@
+(
+- I = (T*) repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = (T*) repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+)
+
+@r4 depends on report || context@
+type T !~ "^struct {";
+position p;
+idexpression T* I;
+identifier alloc = {palloc, palloc0};
+fresh identifier alloc_object = alloc ## "_object";
+@@
+* I = alloc@p(sizeof(T))
+
+@script:python depends on report@
+p << r4.p;
+alloc << r4.alloc;
+alloc_object << r4.alloc_object;
+@@
+coccilib.report.print_report(p[0], "this {alloc} can be replaced with {alloc_object}")
+
+@depends on patch@
+type T !~ "^struct {";
+T* P;
+idexpression T *I;
+constant C;
+identifier alloc = {palloc, palloc0};
+fresh identifier alloc_object = alloc ## "_object";
+@@
+(
+- I = (T*) alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = (T*) alloc(sizeof(T))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof(T))
++ I = alloc_object(T)
+)
diff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c
index 8a8e36dde7e..596d24c382f 100644
--- a/contrib/amcheck/verify_heapam.c
+++ b/contrib/amcheck/verify_heapam.c
@@ -1746,7 +1746,7 @@ check_tuple_attribute(HeapCheckContext *ctx)
{
ToastedAttribute *ta;
- ta = (ToastedAttribute *) palloc0(sizeof(ToastedAttribute));
+ ta = palloc0_object(ToastedAttribute);
VARATT_EXTERNAL_GET_POINTER(ta->toast_pointer, attr);
ta->blkno = ctx->blkno;
diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
index 7f7b55d902a..fb0a77cd177 100644
--- a/contrib/amcheck/verify_nbtree.c
+++ b/contrib/amcheck/verify_nbtree.c
@@ -524,7 +524,7 @@ bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,
/*
* Initialize state for entire verification operation
*/
- state = palloc0(sizeof(BtreeCheckState));
+ state = palloc0_object(BtreeCheckState);
state->rel = rel;
state->heaprel = heaprel;
state->heapkeyspace = heapkeyspace;
diff --git a/contrib/basebackup_to_shell/basebackup_to_shell.c b/contrib/basebackup_to_shell/basebackup_to_shell.c
index d91366b06d2..902f79b3459 100644
--- a/contrib/basebackup_to_shell/basebackup_to_shell.c
+++ b/contrib/basebackup_to_shell/basebackup_to_shell.c
@@ -133,7 +133,7 @@ shell_get_sink(bbsink *next_sink, void *detail_arg)
* We remember the current value of basebackup_to_shell.shell_command to
* be certain that it can't change under us during the backup.
*/
- sink = palloc0(sizeof(bbsink_shell));
+ sink = palloc0_object(bbsink_shell);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_shell_ops;
sink->base.bbs_next = next_sink;
sink->target_detail = detail_arg;
diff --git a/contrib/bloom/blinsert.c b/contrib/bloom/blinsert.c
index ee8ebaf3caf..b989baecff4 100644
--- a/contrib/bloom/blinsert.c
+++ b/contrib/bloom/blinsert.c
@@ -148,7 +148,7 @@ blbuild(Relation heap, Relation index, IndexInfo *indexInfo)
MemoryContextDelete(buildstate.tmpCtx);
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = buildstate.indtuples;
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index 3796bea786b..d92fd8ce719 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -86,7 +86,7 @@ makeDefaultBloomOptions(void)
BloomOptions *opts;
int i;
- opts = (BloomOptions *) palloc0(sizeof(BloomOptions));
+ opts = palloc0_object(BloomOptions);
/* Convert DEFAULT_BLOOM_LENGTH from # of bits to # of words */
opts->bloomLength = (DEFAULT_BLOOM_LENGTH + SIGNWORDBITS - 1) / SIGNWORDBITS;
for (i = 0; i < INDEX_MAX_KEYS; i++)
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 7e1db0b52fc..433cc29c910 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -42,7 +42,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
GenericXLogState *gxlogState;
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
initBloomState(&state, index);
@@ -172,7 +172,7 @@ blvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
return stats;
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
/*
* Iterate over the pages: insert deleted pages into FSM and collect
diff --git a/contrib/btree_gin/btree_gin.c b/contrib/btree_gin/btree_gin.c
index 533c55e9eaf..8252b980aa3 100644
--- a/contrib/btree_gin/btree_gin.c
+++ b/contrib/btree_gin/btree_gin.c
@@ -31,7 +31,7 @@ gin_btree_extract_value(FunctionCallInfo fcinfo, bool is_varlena)
{
Datum datum = PG_GETARG_DATUM(0);
int32 *nentries = (int32 *) PG_GETARG_POINTER(1);
- Datum *entries = (Datum *) palloc(sizeof(Datum));
+ Datum *entries = palloc_object(Datum);
if (is_varlena)
datum = PointerGetDatum(PG_DETOAST_DATUM(datum));
@@ -60,8 +60,8 @@ gin_btree_extract_query(FunctionCallInfo fcinfo,
StrategyNumber strategy = PG_GETARG_UINT16(2);
bool **partialmatch = (bool **) PG_GETARG_POINTER(3);
Pointer **extra_data = (Pointer **) PG_GETARG_POINTER(4);
- Datum *entries = (Datum *) palloc(sizeof(Datum));
- QueryInfo *data = (QueryInfo *) palloc(sizeof(QueryInfo));
+ Datum *entries = palloc_object(Datum);
+ QueryInfo *data = palloc_object(QueryInfo);
bool *ptr_partialmatch;
*nentries = 1;
@@ -280,7 +280,7 @@ GIN_SUPPORT(time, false, leftmostvalue_time, time_cmp)
static Datum
leftmostvalue_timetz(void)
{
- TimeTzADT *v = palloc(sizeof(TimeTzADT));
+ TimeTzADT *v = palloc_object(TimeTzADT);
v->time = 0;
v->zone = -24 * 3600; /* XXX is that true? */
@@ -301,7 +301,7 @@ GIN_SUPPORT(date, false, leftmostvalue_date, date_cmp)
static Datum
leftmostvalue_interval(void)
{
- Interval *v = palloc(sizeof(Interval));
+ Interval *v = palloc_object(Interval);
INTERVAL_NOBEGIN(v);
@@ -313,7 +313,7 @@ GIN_SUPPORT(interval, false, leftmostvalue_interval, interval_cmp)
static Datum
leftmostvalue_macaddr(void)
{
- macaddr *v = palloc0(sizeof(macaddr));
+ macaddr *v = palloc0_object(macaddr);
return MacaddrPGetDatum(v);
}
@@ -323,7 +323,7 @@ GIN_SUPPORT(macaddr, false, leftmostvalue_macaddr, macaddr_cmp)
static Datum
leftmostvalue_macaddr8(void)
{
- macaddr8 *v = palloc0(sizeof(macaddr8));
+ macaddr8 *v = palloc0_object(macaddr8);
return Macaddr8PGetDatum(v);
}
@@ -483,7 +483,7 @@ leftmostvalue_uuid(void)
* palloc0 will create the UUID with all zeroes:
* "00000000-0000-0000-0000-000000000000"
*/
- pg_uuid_t *retval = (pg_uuid_t *) palloc0(sizeof(pg_uuid_t));
+ pg_uuid_t *retval = palloc0_object(pg_uuid_t);
return UUIDPGetDatum(retval);
}
diff --git a/contrib/btree_gist/btree_inet.c b/contrib/btree_gist/btree_inet.c
index 4cffd349091..8ec395b004e 100644
--- a/contrib/btree_gist/btree_inet.c
+++ b/contrib/btree_gist/btree_inet.c
@@ -97,10 +97,10 @@ gbt_inet_compress(PG_FUNCTION_ARGS)
if (entry->leafkey)
{
- inetKEY *r = (inetKEY *) palloc(sizeof(inetKEY));
+ inetKEY *r = palloc_object(inetKEY);
bool failure = false;
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
r->lower = convert_network_to_scalar(entry->key, INETOID, &failure);
Assert(!failure);
r->upper = r->lower;
diff --git a/contrib/btree_gist/btree_interval.c b/contrib/btree_gist/btree_interval.c
index 8f99a416965..76ff7a86d3e 100644
--- a/contrib/btree_gist/btree_interval.c
+++ b/contrib/btree_gist/btree_interval.c
@@ -151,7 +151,7 @@ gbt_intv_compress(PG_FUNCTION_ARGS)
{
char *r = (char *) palloc(2 * INTERVALSIZE);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
if (entry->leafkey)
{
@@ -191,10 +191,10 @@ gbt_intv_decompress(PG_FUNCTION_ARGS)
if (INTERVALSIZE != sizeof(Interval))
{
- intvKEY *r = palloc(sizeof(intvKEY));
+ intvKEY *r = palloc_object(intvKEY);
char *key = DatumGetPointer(entry->key);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
memcpy(&r->lower, key, INTERVALSIZE);
memcpy(&r->upper, key + INTERVALSIZE, INTERVALSIZE);
diff --git a/contrib/btree_gist/btree_time.c b/contrib/btree_gist/btree_time.c
index 2f7859340f6..13c2e176635 100644
--- a/contrib/btree_gist/btree_time.c
+++ b/contrib/btree_gist/btree_time.c
@@ -172,11 +172,11 @@ gbt_timetz_compress(PG_FUNCTION_ARGS)
if (entry->leafkey)
{
- timeKEY *r = (timeKEY *) palloc(sizeof(timeKEY));
+ timeKEY *r = palloc_object(timeKEY);
TimeTzADT *tz = DatumGetTimeTzADTP(entry->key);
TimeADT tmp;
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
/* We are using the time + zone only to compress */
tmp = tz->time + (tz->zone * INT64CONST(1000000));
diff --git a/contrib/btree_gist/btree_ts.c b/contrib/btree_gist/btree_ts.c
index 9e0d979dda9..87450bb1714 100644
--- a/contrib/btree_gist/btree_ts.c
+++ b/contrib/btree_gist/btree_ts.c
@@ -152,7 +152,7 @@ ts_dist(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(a) || TIMESTAMP_NOT_FINITE(b))
{
- Interval *p = palloc(sizeof(Interval));
+ Interval *p = palloc_object(Interval);
p->day = INT_MAX;
p->month = INT_MAX;
@@ -176,7 +176,7 @@ tstz_dist(PG_FUNCTION_ARGS)
if (TIMESTAMP_NOT_FINITE(a) || TIMESTAMP_NOT_FINITE(b))
{
- Interval *p = palloc(sizeof(Interval));
+ Interval *p = palloc_object(Interval);
p->day = INT_MAX;
p->month = INT_MAX;
@@ -221,13 +221,13 @@ gbt_tstz_compress(PG_FUNCTION_ARGS)
if (entry->leafkey)
{
- tsKEY *r = (tsKEY *) palloc(sizeof(tsKEY));
+ tsKEY *r = palloc_object(tsKEY);
TimestampTz ts = DatumGetTimestampTz(entry->key);
Timestamp gmt;
gmt = tstz_to_ts_gmt(ts);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
r->lower = r->upper = gmt;
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page,
diff --git a/contrib/btree_gist/btree_utils_num.c b/contrib/btree_gist/btree_utils_num.c
index 346ee837d75..9a520315d90 100644
--- a/contrib/btree_gist/btree_utils_num.c
+++ b/contrib/btree_gist/btree_utils_num.c
@@ -89,7 +89,7 @@ gbt_num_compress(GISTENTRY *entry, const gbtree_ninfo *tinfo)
memcpy(&r[0], leaf, tinfo->size);
memcpy(&r[tinfo->size], leaf, tinfo->size);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r), entry->rel, entry->page,
entry->offset, false);
}
@@ -156,7 +156,7 @@ gbt_num_fetch(GISTENTRY *entry, const gbtree_ninfo *tinfo)
datum = entry->key;
}
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, datum, entry->rel, entry->page, entry->offset,
false);
return retval;
@@ -344,7 +344,7 @@ gbt_num_picksplit(const GistEntryVector *entryvec, GIST_SPLITVEC *v,
Nsrt *arr;
int nbytes;
- arr = (Nsrt *) palloc((maxoff + 1) * sizeof(Nsrt));
+ arr = palloc_array(Nsrt, (maxoff + 1));
nbytes = (maxoff + 2) * sizeof(OffsetNumber);
v->spl_left = (OffsetNumber *) palloc(nbytes);
v->spl_right = (OffsetNumber *) palloc(nbytes);
diff --git a/contrib/btree_gist/btree_utils_var.c b/contrib/btree_gist/btree_utils_var.c
index 36937795e90..099069c44ad 100644
--- a/contrib/btree_gist/btree_utils_var.c
+++ b/contrib/btree_gist/btree_utils_var.c
@@ -39,7 +39,7 @@ gbt_var_decompress(PG_FUNCTION_ARGS)
if (key != (GBT_VARKEY *) DatumGetPointer(entry->key))
{
- GISTENTRY *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
@@ -288,7 +288,7 @@ gbt_var_compress(GISTENTRY *entry, const gbtree_vinfo *tinfo)
r = gbt_var_key_from_datum(leaf);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page,
entry->offset, true);
@@ -308,7 +308,7 @@ gbt_var_fetch(PG_FUNCTION_ARGS)
GBT_VARKEY_R r = gbt_var_key_readable(key);
GISTENTRY *retval;
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r.lower),
entry->rel, entry->page,
entry->offset, true);
@@ -466,7 +466,7 @@ gbt_var_picksplit(const GistEntryVector *entryvec, GIST_SPLITVEC *v,
GBT_VARKEY **sv = NULL;
gbt_vsrt_arg varg;
- arr = (Vsrt *) palloc((maxoff + 1) * sizeof(Vsrt));
+ arr = palloc_array(Vsrt, (maxoff + 1));
nbytes = (maxoff + 2) * sizeof(OffsetNumber);
v->spl_left = (OffsetNumber *) palloc(nbytes);
v->spl_right = (OffsetNumber *) palloc(nbytes);
@@ -475,7 +475,7 @@ gbt_var_picksplit(const GistEntryVector *entryvec, GIST_SPLITVEC *v,
v->spl_nleft = 0;
v->spl_nright = 0;
- sv = palloc((maxoff + 1) * sizeof(GBT_VARKEY *));
+ sv = palloc_array(GBT_VARKEY *, (maxoff + 1));
/* Sort entries */
diff --git a/contrib/btree_gist/btree_uuid.c b/contrib/btree_gist/btree_uuid.c
index f4c5c6e5892..fff0e338920 100644
--- a/contrib/btree_gist/btree_uuid.c
+++ b/contrib/btree_gist/btree_uuid.c
@@ -108,7 +108,7 @@ gbt_uuid_compress(PG_FUNCTION_ARGS)
char *r = (char *) palloc(2 * UUID_LEN);
pg_uuid_t *key = DatumGetUUIDP(entry->key);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
memcpy(r, key, UUID_LEN);
memcpy(r + UUID_LEN, key, UUID_LEN);
diff --git a/contrib/cube/cube.c b/contrib/cube/cube.c
index bf8fc489dca..4b374893537 100644
--- a/contrib/cube/cube.c
+++ b/contrib/cube/cube.c
@@ -468,7 +468,7 @@ g_cube_decompress(PG_FUNCTION_ARGS)
if (key != DatumGetNDBOXP(entry->key))
{
- GISTENTRY *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
diff --git a/contrib/dict_int/dict_int.c b/contrib/dict_int/dict_int.c
index 3cfe406f669..fe502fe7ad7 100644
--- a/contrib/dict_int/dict_int.c
+++ b/contrib/dict_int/dict_int.c
@@ -35,7 +35,7 @@ dintdict_init(PG_FUNCTION_ARGS)
DictInt *d;
ListCell *l;
- d = (DictInt *) palloc0(sizeof(DictInt));
+ d = palloc0_object(DictInt);
d->maxlen = 6;
d->rejectlong = false;
d->absval = false;
@@ -80,7 +80,7 @@ dintdict_lexize(PG_FUNCTION_ARGS)
char *in = (char *) PG_GETARG_POINTER(1);
int len = PG_GETARG_INT32(2);
char *txt;
- TSLexeme *res = palloc0(sizeof(TSLexeme) * 2);
+ TSLexeme *res = palloc0_array(TSLexeme, 2);
res[1].lexeme = NULL;
diff --git a/contrib/dict_xsyn/dict_xsyn.c b/contrib/dict_xsyn/dict_xsyn.c
index 756ba5998c5..5d3d9e7fcc8 100644
--- a/contrib/dict_xsyn/dict_xsyn.c
+++ b/contrib/dict_xsyn/dict_xsyn.c
@@ -147,7 +147,7 @@ dxsyn_init(PG_FUNCTION_ARGS)
ListCell *l;
char *filename = NULL;
- d = (DictSyn *) palloc0(sizeof(DictSyn));
+ d = palloc0_object(DictSyn);
d->len = 0;
d->syn = NULL;
d->matchorig = true;
@@ -232,12 +232,12 @@ dxsyn_lexize(PG_FUNCTION_ARGS)
char *end;
int nsyns = 0;
- res = palloc(sizeof(TSLexeme));
+ res = palloc_object(TSLexeme);
pos = value;
while ((syn = find_word(pos, &end)) != NULL)
{
- res = repalloc(res, sizeof(TSLexeme) * (nsyns + 2));
+ res = repalloc_array(res, TSLexeme, (nsyns + 2));
/* The first word is output only if keeporig=true */
if (pos != value || d->keeporig)
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 678e754b2b9..adad763ecf3 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/contrib/file_fdw/file_fdw.c
@@ -526,7 +526,7 @@ fileGetForeignRelSize(PlannerInfo *root,
* we might as well get everything and not need to re-fetch it later in
* planning.
*/
- fdw_private = (FileFdwPlanState *) palloc(sizeof(FileFdwPlanState));
+ fdw_private = palloc_object(FileFdwPlanState);
fileGetOptions(foreigntableid,
&fdw_private->filename,
&fdw_private->is_program,
@@ -707,7 +707,7 @@ fileBeginForeignScan(ForeignScanState *node, int eflags)
* Save state in node->fdw_state. We must save enough information to call
* BeginCopyFrom() again.
*/
- festate = (FileFdwExecutionState *) palloc(sizeof(FileFdwExecutionState));
+ festate = palloc_object(FileFdwExecutionState);
festate->filename = filename;
festate->is_program = is_program;
festate->options = options;
@@ -1203,8 +1203,8 @@ file_acquire_sample_rows(Relation onerel, int elevel,
Assert(targrows > 0);
tupDesc = RelationGetDescr(onerel);
- values = (Datum *) palloc(tupDesc->natts * sizeof(Datum));
- nulls = (bool *) palloc(tupDesc->natts * sizeof(bool));
+ values = palloc_array(Datum, tupDesc->natts);
+ nulls = palloc_array(bool, tupDesc->natts);
/* Fetch options of foreign table */
fileGetOptions(RelationGetRelid(onerel), &filename, &is_program, &options);
diff --git a/contrib/hstore/hstore_gin.c b/contrib/hstore/hstore_gin.c
index 766c00bb6a7..061f50a505c 100644
--- a/contrib/hstore/hstore_gin.c
+++ b/contrib/hstore/hstore_gin.c
@@ -103,7 +103,7 @@ gin_extract_hstore_query(PG_FUNCTION_ARGS)
text *item;
*nentries = 1;
- entries = (Datum *) palloc(sizeof(Datum));
+ entries = palloc_object(Datum);
item = makeitem(VARDATA_ANY(query), VARSIZE_ANY_EXHDR(query), KEYFLAG);
entries[0] = PointerGetDatum(item);
}
@@ -120,7 +120,7 @@ gin_extract_hstore_query(PG_FUNCTION_ARGS)
deconstruct_array_builtin(query, TEXTOID, &key_datums, &key_nulls, &key_count);
- entries = (Datum *) palloc(sizeof(Datum) * key_count);
+ entries = palloc_array(Datum, key_count);
for (i = 0, j = 0; i < key_count; ++i)
{
diff --git a/contrib/hstore/hstore_gist.c b/contrib/hstore/hstore_gist.c
index a3b08af3850..c4b0ba0ece5 100644
--- a/contrib/hstore/hstore_gist.c
+++ b/contrib/hstore/hstore_gist.c
@@ -175,7 +175,7 @@ ghstore_compress(PG_FUNCTION_ARGS)
}
}
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset,
@@ -195,7 +195,7 @@ ghstore_compress(PG_FUNCTION_ARGS)
res = ghstore_alloc(true, siglen, NULL);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset,
@@ -429,7 +429,7 @@ ghstore_picksplit(PG_FUNCTION_ARGS)
maxoff = OffsetNumberNext(maxoff);
/* sort before ... */
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/contrib/hstore/hstore_io.c b/contrib/hstore/hstore_io.c
index 2125436e40c..676d7b674f9 100644
--- a/contrib/hstore/hstore_io.c
+++ b/contrib/hstore/hstore_io.c
@@ -519,7 +519,7 @@ hstore_recv(PG_FUNCTION_ARGS)
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("number of pairs (%d) exceeds the maximum allowed (%d)",
pcount, (int) (MaxAllocSize / sizeof(Pairs)))));
- pairs = palloc(pcount * sizeof(Pairs));
+ pairs = palloc_array(Pairs, pcount);
for (i = 0; i < pcount; ++i)
{
@@ -670,7 +670,7 @@ hstore_from_arrays(PG_FUNCTION_ARGS)
Assert(key_count == value_count);
}
- pairs = palloc(key_count * sizeof(Pairs));
+ pairs = palloc_array(Pairs, key_count);
for (i = 0; i < key_count; ++i)
{
@@ -764,7 +764,7 @@ hstore_from_array(PG_FUNCTION_ARGS)
errmsg("number of pairs (%d) exceeds the maximum allowed (%d)",
count, (int) (MaxAllocSize / sizeof(Pairs)))));
- pairs = palloc(count * sizeof(Pairs));
+ pairs = palloc_array(Pairs, count);
for (i = 0; i < count; ++i)
{
@@ -905,7 +905,7 @@ hstore_from_record(PG_FUNCTION_ARGS)
}
Assert(ncolumns <= MaxTupleAttributeNumber); /* thus, no overflow */
- pairs = palloc(ncolumns * sizeof(Pairs));
+ pairs = palloc_array(Pairs, ncolumns);
if (rec)
{
@@ -915,8 +915,8 @@ hstore_from_record(PG_FUNCTION_ARGS)
tuple.t_tableOid = InvalidOid;
tuple.t_data = rec;
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
/* Break down the tuple into fields */
heap_deform_tuple(&tuple, tupdesc, values, nulls);
@@ -1098,8 +1098,8 @@ hstore_populate_record(PG_FUNCTION_ARGS)
my_extra->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
if (rec)
{
diff --git a/contrib/hstore/hstore_op.c b/contrib/hstore/hstore_op.c
index 5e57eceffc8..cdff927a920 100644
--- a/contrib/hstore/hstore_op.c
+++ b/contrib/hstore/hstore_op.c
@@ -101,7 +101,7 @@ hstoreArrayToPairs(ArrayType *a, int *npairs)
errmsg("number of pairs (%d) exceeds the maximum allowed (%d)",
key_count, (int) (MaxAllocSize / sizeof(Pairs)))));
- key_pairs = palloc(sizeof(Pairs) * key_count);
+ key_pairs = palloc_array(Pairs, key_count);
for (i = 0, j = 0; i < key_count; i++)
{
@@ -588,8 +588,8 @@ hstore_slice_to_array(PG_FUNCTION_ARGS)
PG_RETURN_POINTER(aout);
}
- out_datums = palloc(sizeof(Datum) * key_count);
- out_nulls = palloc(sizeof(bool) * key_count);
+ out_datums = palloc_array(Datum, key_count);
+ out_nulls = palloc_array(bool, key_count);
for (i = 0; i < key_count; ++i)
{
@@ -649,7 +649,7 @@ hstore_slice_to_hstore(PG_FUNCTION_ARGS)
}
/* hstoreArrayToPairs() checked overflow */
- out_pairs = palloc(sizeof(Pairs) * nkeys);
+ out_pairs = palloc_array(Pairs, nkeys);
bufsiz = 0;
/*
@@ -705,7 +705,7 @@ hstore_akeys(PG_FUNCTION_ARGS)
PG_RETURN_POINTER(a);
}
- d = (Datum *) palloc(sizeof(Datum) * count);
+ d = palloc_array(Datum, count);
for (i = 0; i < count; ++i)
{
@@ -741,8 +741,8 @@ hstore_avals(PG_FUNCTION_ARGS)
PG_RETURN_POINTER(a);
}
- d = (Datum *) palloc(sizeof(Datum) * count);
- nulls = (bool *) palloc(sizeof(bool) * count);
+ d = palloc_array(Datum, count);
+ nulls = palloc_array(bool, count);
for (i = 0; i < count; ++i)
{
diff --git a/contrib/hstore_plperl/hstore_plperl.c b/contrib/hstore_plperl/hstore_plperl.c
index 4a1629cad51..945b90eba64 100644
--- a/contrib/hstore_plperl/hstore_plperl.c
+++ b/contrib/hstore_plperl/hstore_plperl.c
@@ -118,7 +118,7 @@ plperl_to_hstore(PG_FUNCTION_ARGS)
pcount = hv_iterinit(hv);
- pairs = palloc(pcount * sizeof(Pairs));
+ pairs = palloc_array(Pairs, pcount);
i = 0;
while ((he = hv_iternext(hv)))
diff --git a/contrib/hstore_plpython/hstore_plpython.c b/contrib/hstore_plpython/hstore_plpython.c
index 310f63c30d4..8dea01cb6d2 100644
--- a/contrib/hstore_plpython/hstore_plpython.c
+++ b/contrib/hstore_plpython/hstore_plpython.c
@@ -147,7 +147,7 @@ plpython_to_hstore(PG_FUNCTION_ARGS)
Py_ssize_t i;
Pairs *pairs;
- pairs = palloc(pcount * sizeof(*pairs));
+ pairs = palloc_array(Pairs, pcount);
for (i = 0; i < pcount; i++)
{
diff --git a/contrib/intarray/_int_bool.c b/contrib/intarray/_int_bool.c
index 2b2c3f4029e..9b4909343a1 100644
--- a/contrib/intarray/_int_bool.c
+++ b/contrib/intarray/_int_bool.c
@@ -135,7 +135,7 @@ gettoken(WORKSTATE *state, int32 *val)
static void
pushquery(WORKSTATE *state, int32 type, int32 val)
{
- NODE *tmp = (NODE *) palloc(sizeof(NODE));
+ NODE *tmp = palloc_object(NODE);
tmp->type = type;
tmp->val = val;
diff --git a/contrib/intarray/_int_gin.c b/contrib/intarray/_int_gin.c
index b7958d8eca5..f213b45f440 100644
--- a/contrib/intarray/_int_gin.c
+++ b/contrib/intarray/_int_gin.c
@@ -42,7 +42,7 @@ ginint4_queryextract(PG_FUNCTION_ARGS)
/*
* Extract all the VAL items as things we want GIN to check for.
*/
- res = (Datum *) palloc(sizeof(Datum) * query->size);
+ res = palloc_array(Datum, query->size);
*nentries = 0;
for (i = 0; i < query->size; i++)
@@ -65,7 +65,7 @@ ginint4_queryextract(PG_FUNCTION_ARGS)
int32 *arr;
int32 i;
- res = (Datum *) palloc(sizeof(Datum) * (*nentries));
+ res = palloc_array(Datum, (*nentries));
arr = ARRPTR(query);
for (i = 0; i < *nentries; i++)
diff --git a/contrib/intarray/_int_gist.c b/contrib/intarray/_int_gist.c
index a09b7fa812c..90cf11c01a5 100644
--- a/contrib/intarray/_int_gist.c
+++ b/contrib/intarray/_int_gist.c
@@ -186,7 +186,7 @@ g_int_compress(PG_FUNCTION_ARGS)
errmsg("input array is too big (%d maximum allowed, %d current), use gist__intbig_ops opclass instead",
2 * num_ranges - 1, ARRNELEMS(r))));
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page, entry->offset, false);
@@ -276,7 +276,7 @@ g_int_compress(PG_FUNCTION_ARGS)
errmsg("data is too sparse, recreate index using gist__intbig_ops opclass instead")));
r = resize_intArrayType(r, len);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page, entry->offset, false);
PG_RETURN_POINTER(retval);
@@ -306,7 +306,7 @@ g_int_decompress(PG_FUNCTION_ARGS)
{
if (in != (ArrayType *) DatumGetPointer(entry->key))
{
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(in),
entry->rel, entry->page, entry->offset, false);
PG_RETURN_POINTER(retval);
@@ -321,7 +321,7 @@ g_int_decompress(PG_FUNCTION_ARGS)
{ /* not compressed value */
if (in != (ArrayType *) DatumGetPointer(entry->key))
{
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(in),
entry->rel, entry->page, entry->offset, false);
@@ -350,7 +350,7 @@ g_int_decompress(PG_FUNCTION_ARGS)
if (in != (ArrayType *) DatumGetPointer(entry->key))
pfree(in);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page, entry->offset, false);
@@ -535,7 +535,7 @@ g_int_picksplit(PG_FUNCTION_ARGS)
/*
* sort entries
*/
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
{
costvector[i - 1].pos = i;
diff --git a/contrib/intarray/_intbig_gist.c b/contrib/intarray/_intbig_gist.c
index 9699fbf3b4f..0afa8a73b68 100644
--- a/contrib/intarray/_intbig_gist.c
+++ b/contrib/intarray/_intbig_gist.c
@@ -174,7 +174,7 @@ g_intbig_compress(PG_FUNCTION_ARGS)
ptr++;
}
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -195,7 +195,7 @@ g_intbig_compress(PG_FUNCTION_ARGS)
}
res = _intbig_alloc(true, siglen, sign);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -385,7 +385,7 @@ g_intbig_picksplit(PG_FUNCTION_ARGS)
maxoff = OffsetNumberNext(maxoff);
/* sort before ... */
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/contrib/jsonb_plpython/jsonb_plpython.c b/contrib/jsonb_plpython/jsonb_plpython.c
index a625727c5e8..50cbf53f980 100644
--- a/contrib/jsonb_plpython/jsonb_plpython.c
+++ b/contrib/jsonb_plpython/jsonb_plpython.c
@@ -416,7 +416,7 @@ PLyObject_ToJsonbValue(PyObject *obj, JsonbParseState **jsonb_state, bool is_ele
return PLyMapping_ToJsonbValue(obj, jsonb_state);
}
- out = palloc(sizeof(JsonbValue));
+ out = palloc_object(JsonbValue);
if (obj == Py_None)
out->type = jbvNull;
diff --git a/contrib/ltree/_ltree_gist.c b/contrib/ltree/_ltree_gist.c
index 286ad24fbe8..a70f49982fe 100644
--- a/contrib/ltree/_ltree_gist.c
+++ b/contrib/ltree/_ltree_gist.c
@@ -79,7 +79,7 @@ _ltree_compress(PG_FUNCTION_ARGS)
item = NEXTVAL(item);
}
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
entry->offset, false);
@@ -97,7 +97,7 @@ _ltree_compress(PG_FUNCTION_ARGS)
}
key = ltree_gist_alloc(true, sign, siglen, NULL, NULL);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
entry->offset, false);
@@ -310,7 +310,7 @@ _ltree_picksplit(PG_FUNCTION_ARGS)
maxoff = OffsetNumberNext(maxoff);
/* sort before ... */
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/contrib/ltree/_ltree_op.c b/contrib/ltree/_ltree_op.c
index b4a8097328d..4d54ad34bb6 100644
--- a/contrib/ltree/_ltree_op.c
+++ b/contrib/ltree/_ltree_op.c
@@ -307,7 +307,7 @@ _lca(PG_FUNCTION_ARGS)
(errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
errmsg("array must not contain nulls")));
- a = (ltree **) palloc(sizeof(ltree *) * num);
+ a = palloc_array(ltree *, num);
while (num > 0)
{
num--;
diff --git a/contrib/ltree/ltree_gist.c b/contrib/ltree/ltree_gist.c
index 932f69bff2d..3ff28cf8ce0 100644
--- a/contrib/ltree/ltree_gist.c
+++ b/contrib/ltree/ltree_gist.c
@@ -101,7 +101,7 @@ ltree_compress(PG_FUNCTION_ARGS)
ltree *val = DatumGetLtreeP(entry->key);
ltree_gist *key = ltree_gist_alloc(false, NULL, 0, val, 0);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
entry->offset, false);
@@ -117,7 +117,7 @@ ltree_decompress(PG_FUNCTION_ARGS)
if (PointerGetDatum(key) != entry->key)
{
- GISTENTRY *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
@@ -318,7 +318,7 @@ ltree_picksplit(PG_FUNCTION_ARGS)
v->spl_right = (OffsetNumber *) palloc(nbytes);
v->spl_nleft = 0;
v->spl_nright = 0;
- array = (RIX *) palloc(sizeof(RIX) * (maxoff + 1));
+ array = palloc_array(RIX, (maxoff + 1));
/* copy the data into RIXes, and sort the RIXes */
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
diff --git a/contrib/ltree/ltree_io.c b/contrib/ltree/ltree_io.c
index b54a15d6c68..4813a6582af 100644
--- a/contrib/ltree/ltree_io.c
+++ b/contrib/ltree/ltree_io.c
@@ -65,7 +65,7 @@ parse_ltree(const char *buf, struct Node *escontext)
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
errmsg("number of ltree labels (%d) exceeds the maximum allowed (%d)",
num + 1, LTREE_MAX_LEVELS)));
- list = lptr = (nodeitem *) palloc(sizeof(nodeitem) * (num + 1));
+ list = lptr = palloc_array(nodeitem, (num + 1));
ptr = buf;
while (*ptr)
{
@@ -318,14 +318,16 @@ parse_lquery(const char *buf, struct Node *escontext)
case LQPRS_WAITLEVEL:
if (ISLABEL(ptr))
{
- GETVAR(curqlevel) = lptr = (nodeitem *) palloc0(sizeof(nodeitem) * (numOR + 1));
+ GETVAR(curqlevel) = lptr = palloc0_array(nodeitem,
+ (numOR + 1));
lptr->start = ptr;
state = LQPRS_WAITDELIM;
curqlevel->numvar = 1;
}
else if (t_iseq(ptr, '!'))
{
- GETVAR(curqlevel) = lptr = (nodeitem *) palloc0(sizeof(nodeitem) * (numOR + 1));
+ GETVAR(curqlevel) = lptr = palloc0_array(nodeitem,
+ (numOR + 1));
lptr->start = ptr + 1;
lptr->wlen = -1; /* compensate for counting ! below */
state = LQPRS_WAITDELIM;
diff --git a/contrib/ltree/ltree_op.c b/contrib/ltree/ltree_op.c
index 0e30dee4658..ee021250e82 100644
--- a/contrib/ltree/ltree_op.c
+++ b/contrib/ltree/ltree_op.c
@@ -571,7 +571,7 @@ lca(PG_FUNCTION_ARGS)
ltree **a,
*res;
- a = (ltree **) palloc(sizeof(ltree *) * fcinfo->nargs);
+ a = palloc_array(ltree *, fcinfo->nargs);
for (i = 0; i < fcinfo->nargs; i++)
a[i] = PG_GETARG_LTREE_P(i);
res = lca_inner(a, (int) fcinfo->nargs);
diff --git a/contrib/ltree/ltxtquery_io.c b/contrib/ltree/ltxtquery_io.c
index 7b8fba17ff2..e6944008c40 100644
--- a/contrib/ltree/ltxtquery_io.c
+++ b/contrib/ltree/ltxtquery_io.c
@@ -154,7 +154,7 @@ gettoken_query(QPRS_STATE *state, int32 *val, int32 *lenval, char **strval, uint
static bool
pushquery(QPRS_STATE *state, int32 type, int32 val, int32 distance, int32 lenval, uint16 flag)
{
- NODE *tmp = (NODE *) palloc(sizeof(NODE));
+ NODE *tmp = palloc_object(NODE);
tmp->type = type;
tmp->val = val;
diff --git a/contrib/pageinspect/brinfuncs.c b/contrib/pageinspect/brinfuncs.c
index 990c965aa92..597f8d09fc4 100644
--- a/contrib/pageinspect/brinfuncs.c
+++ b/contrib/pageinspect/brinfuncs.c
@@ -186,7 +186,8 @@ brin_page_items(PG_FUNCTION_ARGS)
* Initialize output functions for all indexed datatypes; simplifies
* calling them later.
*/
- columns = palloc(sizeof(brin_column_state *) * RelationGetDescr(indexRel)->natts);
+ columns = palloc_array(brin_column_state *,
+ RelationGetDescr(indexRel)->natts);
for (attno = 1; attno <= bdesc->bd_tupdesc->natts; attno++)
{
Oid output;
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 9cdc8e182b4..80e28862f8f 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -380,7 +380,7 @@ bt_multi_page_stats(PG_FUNCTION_ARGS)
/* Save arguments for reuse */
mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- uargs = palloc(sizeof(ua_page_stats));
+ uargs = palloc_object(ua_page_stats);
uargs->relid = RelationGetRelid(rel);
uargs->blkno = blkno;
@@ -598,7 +598,7 @@ bt_page_print_tuples(ua_page_items *uargs)
tids = BTreeTupleGetPosting(itup);
nposting = BTreeTupleGetNPosting(itup);
- tids_datum = (Datum *) palloc(nposting * sizeof(Datum));
+ tids_datum = palloc_array(Datum, nposting);
for (int i = 0; i < nposting; i++)
tids_datum[i] = ItemPointerGetDatum(&tids[i]);
values[j++] = PointerGetDatum(construct_array_builtin(tids_datum, nposting, TIDOID));
@@ -661,7 +661,7 @@ bt_page_items_internal(PG_FUNCTION_ARGS, enum pageinspect_version ext_version)
*/
mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- uargs = palloc(sizeof(ua_page_items));
+ uargs = palloc_object(ua_page_items);
uargs->page = palloc(BLCKSZ);
memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
@@ -753,7 +753,7 @@ bt_page_items_bytea(PG_FUNCTION_ARGS)
fctx = SRF_FIRSTCALL_INIT();
mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- uargs = palloc(sizeof(ua_page_items));
+ uargs = palloc_object(ua_page_items);
uargs->page = get_page_from_raw(raw_page);
diff --git a/contrib/pageinspect/ginfuncs.c b/contrib/pageinspect/ginfuncs.c
index 09a90957081..6e4eaea1900 100644
--- a/contrib/pageinspect/ginfuncs.c
+++ b/contrib/pageinspect/ginfuncs.c
@@ -222,7 +222,7 @@ gin_leafpage_items(PG_FUNCTION_ARGS)
opaq->flags,
(GIN_DATA | GIN_LEAF | GIN_COMPRESSED))));
- inter_call_data = palloc(sizeof(gin_leafpage_items_state));
+ inter_call_data = palloc_object(gin_leafpage_items_state);
/* Build a tuple descriptor for our result type */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
@@ -262,7 +262,7 @@ gin_leafpage_items(PG_FUNCTION_ARGS)
/* build an array of decoded item pointers */
tids = ginPostingListDecode(cur, &ndecoded);
- tids_datum = (Datum *) palloc(ndecoded * sizeof(Datum));
+ tids_datum = palloc_array(Datum, ndecoded);
for (i = 0; i < ndecoded; i++)
tids_datum[i] = ItemPointerGetDatum(&tids[i]);
values[2] = PointerGetDatum(construct_array_builtin(tids_datum, ndecoded, TIDOID));
diff --git a/contrib/pageinspect/hashfuncs.c b/contrib/pageinspect/hashfuncs.c
index d4a2a1d676a..1c4b8e9ce16 100644
--- a/contrib/pageinspect/hashfuncs.c
+++ b/contrib/pageinspect/hashfuncs.c
@@ -325,7 +325,7 @@ hash_page_items(PG_FUNCTION_ARGS)
page = verify_hash_page(raw_page, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
- uargs = palloc(sizeof(struct user_args));
+ uargs = palloc_object(struct user_args);
uargs->page = page;
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 64f32b5b42a..74447f7a626 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -153,7 +153,7 @@ heap_page_items(PG_FUNCTION_ARGS)
fctx = SRF_FIRSTCALL_INIT();
mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- inter_call_data = palloc(sizeof(heap_page_items_state));
+ inter_call_data = palloc_object(heap_page_items_state);
/* Build a tuple descriptor for our result type */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
@@ -543,7 +543,7 @@ heap_tuple_infomask_flags(PG_FUNCTION_ARGS)
}
/* build set of raw flags */
- flags = (Datum *) palloc0(sizeof(Datum) * bitcnt);
+ flags = palloc0_array(Datum, bitcnt);
/* decode t_infomask */
if ((t_infomask & HEAP_HASNULL) != 0)
diff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c
index 3ae0a018e10..b12ab41bd32 100644
--- a/contrib/pg_buffercache/pg_buffercache_pages.c
+++ b/contrib/pg_buffercache/pg_buffercache_pages.c
@@ -86,7 +86,7 @@ pg_buffercache_pages(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* Create a user function context for cross-call persistence */
- fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
+ fctx = palloc_object(BufferCachePagesContext);
/*
* To smoothly support upgrades from version 1.0 of this extension
diff --git a/contrib/pg_logicalinspect/pg_logicalinspect.c b/contrib/pg_logicalinspect/pg_logicalinspect.c
index cd575c6bd36..c6ccaba49b6 100644
--- a/contrib/pg_logicalinspect/pg_logicalinspect.c
+++ b/contrib/pg_logicalinspect/pg_logicalinspect.c
@@ -128,7 +128,8 @@ pg_get_logical_snapshot_info(PG_FUNCTION_ARGS)
{
Datum *arrayelems;
- arrayelems = (Datum *) palloc(ondisk.builder.committed.xcnt * sizeof(Datum));
+ arrayelems = palloc_array(Datum,
+ ondisk.builder.committed.xcnt);
for (int j = 0; j < ondisk.builder.committed.xcnt; j++)
arrayelems[j] = TransactionIdGetDatum(ondisk.builder.committed.xip[j]);
@@ -145,7 +146,8 @@ pg_get_logical_snapshot_info(PG_FUNCTION_ARGS)
{
Datum *arrayelems;
- arrayelems = (Datum *) palloc(ondisk.builder.catchange.xcnt * sizeof(Datum));
+ arrayelems = palloc_array(Datum,
+ ondisk.builder.catchange.xcnt);
for (int j = 0; j < ondisk.builder.catchange.xcnt; j++)
arrayelems[j] = TransactionIdGetDatum(ondisk.builder.catchange.xip[j]);
diff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c
index b45755b3347..0ead3978bb2 100644
--- a/contrib/pg_prewarm/autoprewarm.c
+++ b/contrib/pg_prewarm/autoprewarm.c
@@ -590,8 +590,7 @@ apw_dump_now(bool is_bgworker, bool dump_unlogged)
return 0;
}
- block_info_array =
- (BlockInfoRecord *) palloc(sizeof(BlockInfoRecord) * NBuffers);
+ block_info_array = palloc_array(BlockInfoRecord, NBuffers);
for (num_blocks = 0, i = 0; i < NBuffers; i++)
{
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index bebf8134eb0..45e68a7445d 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -2147,7 +2147,7 @@ entry_dealloc(void)
* cur_median_usage includes the entries we're about to zap.
*/
- entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
+ entries = palloc_array(pgssEntry *, hash_get_num_entries(pgss_hash));
i = 0;
tottextlen = 0;
diff --git a/contrib/pg_trgm/trgm_gin.c b/contrib/pg_trgm/trgm_gin.c
index 29a52eac7af..10ed386bcb3 100644
--- a/contrib/pg_trgm/trgm_gin.c
+++ b/contrib/pg_trgm/trgm_gin.c
@@ -51,7 +51,7 @@ gin_extract_value_trgm(PG_FUNCTION_ARGS)
int32 i;
*nentries = trglen;
- entries = (Datum *) palloc(sizeof(Datum) * trglen);
+ entries = palloc_array(Datum, trglen);
ptr = GETARR(trg);
for (i = 0; i < trglen; i++)
@@ -146,7 +146,7 @@ gin_extract_query_trgm(PG_FUNCTION_ARGS)
if (trglen > 0)
{
- entries = (Datum *) palloc(sizeof(Datum) * trglen);
+ entries = palloc_array(Datum, trglen);
ptr = GETARR(trg);
for (i = 0; i < trglen; i++)
{
@@ -339,7 +339,7 @@ gin_trgm_triconsistent(PG_FUNCTION_ARGS)
* function, promoting all GIN_MAYBE keys to GIN_TRUE will
* give a conservative result.
*/
- boolcheck = (bool *) palloc(sizeof(bool) * nkeys);
+ boolcheck = palloc_array(bool, nkeys);
for (i = 0; i < nkeys; i++)
boolcheck[i] = (check[i] != GIN_FALSE);
if (!trigramsMatchGraph((TrgmPackedGraph *) extra_data[0],
diff --git a/contrib/pg_trgm/trgm_gist.c b/contrib/pg_trgm/trgm_gist.c
index 7f482f958fd..72897da49d8 100644
--- a/contrib/pg_trgm/trgm_gist.c
+++ b/contrib/pg_trgm/trgm_gist.c
@@ -124,7 +124,7 @@ gtrgm_compress(PG_FUNCTION_ARGS)
text *val = DatumGetTextPP(entry->key);
res = generate_trgm(VARDATA_ANY(val), VARSIZE_ANY_EXHDR(val));
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -143,7 +143,7 @@ gtrgm_compress(PG_FUNCTION_ARGS)
}
res = gtrgm_alloc(true, siglen, sign);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -163,7 +163,7 @@ gtrgm_decompress(PG_FUNCTION_ARGS)
if (key != (text *) DatumGetPointer(entry->key))
{
/* need to pass back the decompressed item */
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page, entry->offset, entry->leafkey);
PG_RETURN_POINTER(retval);
@@ -423,7 +423,7 @@ gtrgm_consistent(PG_FUNCTION_ARGS)
* So we can apply trigramsMatchGraph despite uncertainty,
* and that usefully improves the quality of the search.
*/
- check = (bool *) palloc(len * sizeof(bool));
+ check = palloc_array(bool, len);
for (k = 0; k < len; k++)
{
CPTRGM(((char *) &tmp), ptr + k);
@@ -820,7 +820,7 @@ gtrgm_picksplit(PG_FUNCTION_ARGS)
SPLITCOST *costvector;
/* cache the sign data for each existing item */
- cache = (CACHESIGN *) palloc(sizeof(CACHESIGN) * (maxoff + 1));
+ cache = palloc_array(CACHESIGN, (maxoff + 1));
cache_sign = palloc(siglen * (maxoff + 1));
for (k = FirstOffsetNumber; k <= maxoff; k = OffsetNumberNext(k))
@@ -864,7 +864,7 @@ gtrgm_picksplit(PG_FUNCTION_ARGS)
union_r = GETSIGN(datum_r);
/* sort before ... */
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/contrib/pg_trgm/trgm_op.c b/contrib/pg_trgm/trgm_op.c
index d0833b3e4a1..b8446d5fb1d 100644
--- a/contrib/pg_trgm/trgm_op.c
+++ b/contrib/pg_trgm/trgm_op.c
@@ -405,7 +405,7 @@ make_positional_trgm(trgm *trg1, int len1, trgm *trg2, int len2)
int i,
len = len1 + len2;
- result = (pos_trgm *) palloc(sizeof(pos_trgm) * len);
+ result = palloc_array(pos_trgm, len);
for (i = 0; i < len1; i++)
{
@@ -488,7 +488,7 @@ iterate_word_similarity(int *trg2indexes,
lower = (flags & WORD_SIMILARITY_STRICT) ? 0 : -1;
/* Memorise last position of each trigram */
- lastpos = (int *) palloc(sizeof(int) * len);
+ lastpos = palloc_array(int, len);
memset(lastpos, -1, sizeof(int) * len);
for (i = 0; i < len2; i++)
@@ -664,8 +664,8 @@ calc_word_similarity(char *str1, int slen1, char *str2, int slen2,
* Merge positional trigrams array: enumerate each trigram and find its
* presence in required word.
*/
- trg2indexes = (int *) palloc(sizeof(int) * len2);
- found = (bool *) palloc0(sizeof(bool) * len);
+ trg2indexes = palloc_array(int, len2);
+ found = palloc0_array(bool, len);
ulen1 = 0;
j = 0;
@@ -891,7 +891,7 @@ generate_wildcard_trgm(const char *str, int slen)
tptr = GETARR(trg);
/* Allocate a buffer for blank-padded, but not yet case-folded, words */
- buf = palloc(sizeof(char) * (slen + 4));
+ buf = palloc_array(char, (slen + 4));
/*
* Extract trigrams from each substring extracted by get_wildcard_part.
@@ -961,7 +961,7 @@ show_trgm(PG_FUNCTION_ARGS)
int i;
trg = generate_trgm(VARDATA_ANY(in), VARSIZE_ANY_EXHDR(in));
- d = (Datum *) palloc(sizeof(Datum) * (1 + ARRNELEM(trg)));
+ d = palloc_array(Datum, (1 + ARRNELEM(trg)));
for (i = 0, ptr = GETARR(trg); i < ARRNELEM(trg); i++, ptr++)
{
@@ -1089,7 +1089,7 @@ trgm_presence_map(TRGM *query, TRGM *key)
lenk = ARRNELEM(key),
i;
- result = (bool *) palloc0(lenq * sizeof(bool));
+ result = palloc0_array(bool, lenq);
/* for each query trigram, do a binary search in the key array */
for (i = 0; i < lenq; i++)
diff --git a/contrib/pg_trgm/trgm_regexp.c b/contrib/pg_trgm/trgm_regexp.c
index 149f9eb259c..288be0ac153 100644
--- a/contrib/pg_trgm/trgm_regexp.c
+++ b/contrib/pg_trgm/trgm_regexp.c
@@ -728,7 +728,7 @@ RE_compile(regex_t *regex, text *text_re, int cflags, Oid collation)
char errMsg[100];
/* Convert pattern string to wide characters */
- pattern = (pg_wchar *) palloc((text_re_len + 1) * sizeof(pg_wchar));
+ pattern = palloc_array(pg_wchar, (text_re_len + 1));
pattern_len = pg_mb2wchar_with_len(text_re_val,
pattern,
text_re_len);
@@ -796,7 +796,7 @@ getColorInfo(regex_t *regex, TrgmNFA *trgmNFA)
colorInfo->wordCharsCount = 0;
/* Extract all the chars in this color */
- chars = (pg_wchar *) palloc(sizeof(pg_wchar) * charsCount);
+ chars = palloc_array(pg_wchar, charsCount);
pg_reg_getcharacters(regex, i, chars, charsCount);
/*
@@ -1063,7 +1063,7 @@ addKey(TrgmNFA *trgmNFA, TrgmState *state, TrgmStateKey *key)
* original NFA.
*/
arcsCount = pg_reg_getnumoutarcs(trgmNFA->regex, key->nstate);
- arcs = (regex_arc_t *) palloc(sizeof(regex_arc_t) * arcsCount);
+ arcs = palloc_array(regex_arc_t, arcsCount);
pg_reg_getoutarcs(trgmNFA->regex, key->nstate, arcs, arcsCount);
for (i = 0; i < arcsCount; i++)
@@ -1177,7 +1177,7 @@ addKey(TrgmNFA *trgmNFA, TrgmState *state, TrgmStateKey *key)
static void
addKeyToQueue(TrgmNFA *trgmNFA, TrgmStateKey *key)
{
- TrgmStateKey *keyCopy = (TrgmStateKey *) palloc(sizeof(TrgmStateKey));
+ TrgmStateKey *keyCopy = palloc_object(TrgmStateKey);
memcpy(keyCopy, key, sizeof(TrgmStateKey));
trgmNFA->keysQueue = lappend(trgmNFA->keysQueue, keyCopy);
@@ -1215,7 +1215,7 @@ addArcs(TrgmNFA *trgmNFA, TrgmState *state)
TrgmStateKey *key = (TrgmStateKey *) lfirst(cell);
arcsCount = pg_reg_getnumoutarcs(trgmNFA->regex, key->nstate);
- arcs = (regex_arc_t *) palloc(sizeof(regex_arc_t) * arcsCount);
+ arcs = palloc_array(regex_arc_t, arcsCount);
pg_reg_getoutarcs(trgmNFA->regex, key->nstate, arcs, arcsCount);
for (i = 0; i < arcsCount; i++)
@@ -1311,7 +1311,7 @@ addArc(TrgmNFA *trgmNFA, TrgmState *state, TrgmStateKey *key,
}
/* Checks were successful, add new arc */
- arc = (TrgmArc *) palloc(sizeof(TrgmArc));
+ arc = palloc_object(TrgmArc);
arc->target = getState(trgmNFA, destKey);
arc->ctrgm.colors[0] = key->prefix.colors[0];
arc->ctrgm.colors[1] = key->prefix.colors[1];
@@ -1467,7 +1467,7 @@ selectColorTrigrams(TrgmNFA *trgmNFA)
int cnumber;
/* Collect color trigrams from all arcs */
- colorTrgms = (ColorTrgmInfo *) palloc0(sizeof(ColorTrgmInfo) * arcsCount);
+ colorTrgms = palloc0_array(ColorTrgmInfo, arcsCount);
trgmNFA->colorTrgms = colorTrgms;
i = 0;
@@ -1479,7 +1479,7 @@ selectColorTrigrams(TrgmNFA *trgmNFA)
foreach(cell, state->arcs)
{
TrgmArc *arc = (TrgmArc *) lfirst(cell);
- TrgmArcInfo *arcInfo = (TrgmArcInfo *) palloc(sizeof(TrgmArcInfo));
+ TrgmArcInfo *arcInfo = palloc_object(TrgmArcInfo);
ColorTrgmInfo *trgmInfo = &colorTrgms[i];
arcInfo->source = state;
@@ -1964,8 +1964,7 @@ packGraph(TrgmNFA *trgmNFA, MemoryContext rcontext)
}
/* Collect array of all arcs */
- arcs = (TrgmPackArcInfo *)
- palloc(sizeof(TrgmPackArcInfo) * trgmNFA->arcsCount);
+ arcs = palloc_array(TrgmPackArcInfo, trgmNFA->arcsCount);
arcIndex = 0;
hash_seq_init(&scan_status, trgmNFA->states);
while ((state = (TrgmState *) hash_seq_search(&scan_status)) != NULL)
@@ -2147,7 +2146,7 @@ printSourceNFA(regex_t *regex, TrgmColorInfo *colors, int ncolors)
appendStringInfoString(&buf, ";\n");
arcsCount = pg_reg_getnumoutarcs(regex, state);
- arcs = (regex_arc_t *) palloc(sizeof(regex_arc_t) * arcsCount);
+ arcs = palloc_array(regex_arc_t, arcsCount);
pg_reg_getoutarcs(regex, state, arcs, arcsCount);
for (i = 0; i < arcsCount; i++)
diff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c
index c900cfcea40..6b2ce8db2f5 100644
--- a/contrib/pg_visibility/pg_visibility.c
+++ b/contrib/pg_visibility/pg_visibility.c
@@ -732,7 +732,7 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)
* number of entries allocated. We'll repurpose these fields before
* returning.
*/
- items = palloc0(sizeof(corrupt_items));
+ items = palloc0_object(corrupt_items);
items->next = 0;
items->count = 64;
items->tids = palloc(items->count * sizeof(ItemPointerData));
diff --git a/contrib/pg_walinspect/pg_walinspect.c b/contrib/pg_walinspect/pg_walinspect.c
index 9e609415789..f7d4bf8d007 100644
--- a/contrib/pg_walinspect/pg_walinspect.c
+++ b/contrib/pg_walinspect/pg_walinspect.c
@@ -105,8 +105,7 @@ InitXLogReaderState(XLogRecPtr lsn)
errmsg("could not read WAL at LSN %X/%X",
LSN_FORMAT_ARGS(lsn))));
- private_data = (ReadLocalXLogPageNoWaitPrivate *)
- palloc0(sizeof(ReadLocalXLogPageNoWaitPrivate));
+ private_data = palloc0_object(ReadLocalXLogPageNoWaitPrivate);
xlogreader = XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(.page_read = &read_local_xlog_page_no_wait,
@@ -306,7 +305,7 @@ GetWALBlockInfo(FunctionCallInfo fcinfo, XLogReaderState *record,
/* Construct and save block_fpi_info */
bitcnt = pg_popcount((const char *) &blk->bimg_info,
sizeof(uint8));
- flags = (Datum *) palloc0(sizeof(Datum) * bitcnt);
+ flags = palloc0_array(Datum, bitcnt);
if ((blk->bimg_info & BKPIMAGE_HAS_HOLE) != 0)
flags[cnt++] = CStringGetTextDatum("HAS_HOLE");
if (blk->apply_image)
diff --git a/contrib/pgcrypto/mbuf.c b/contrib/pgcrypto/mbuf.c
index 99f8957b004..054b5599d07 100644
--- a/contrib/pgcrypto/mbuf.c
+++ b/contrib/pgcrypto/mbuf.c
@@ -115,7 +115,7 @@ mbuf_create(int len)
if (!len)
len = 8192;
- mbuf = palloc(sizeof *mbuf);
+ mbuf = palloc_object(MBuf);
mbuf->data = palloc(len);
mbuf->buf_end = mbuf->data + len;
mbuf->data_end = mbuf->data;
@@ -132,7 +132,7 @@ mbuf_create_from_data(uint8 *data, int len)
{
MBuf *mbuf;
- mbuf = palloc(sizeof *mbuf);
+ mbuf = palloc_object(MBuf);
mbuf->data = (uint8 *) data;
mbuf->buf_end = mbuf->data + len;
mbuf->data_end = mbuf->data + len;
@@ -206,7 +206,7 @@ pullf_create(PullFilter **pf_p, const PullFilterOps *op, void *init_arg, PullFil
res = 0;
}
- pf = palloc0(sizeof(*pf));
+ pf = palloc0_object(PullFilter);
pf->buflen = res;
pf->op = op;
pf->priv = priv;
@@ -372,7 +372,7 @@ pushf_create(PushFilter **mp_p, const PushFilterOps *op, void *init_arg, PushFil
res = 0;
}
- mp = palloc0(sizeof(*mp));
+ mp = palloc0_object(PushFilter);
mp->block_size = res;
mp->op = op;
mp->priv = priv;
diff --git a/contrib/pgcrypto/openssl.c b/contrib/pgcrypto/openssl.c
index 448db331a0f..25a86aa6623 100644
--- a/contrib/pgcrypto/openssl.c
+++ b/contrib/pgcrypto/openssl.c
@@ -196,7 +196,7 @@ px_find_digest(const char *name, PX_MD **res)
ResourceOwnerRememberOSSLDigest(digest->owner, digest);
/* The PX_MD object is allocated in the current memory context. */
- h = palloc(sizeof(*h));
+ h = palloc_object(PX_MD);
h->result_size = digest_result_size;
h->block_size = digest_block_size;
h->reset = digest_reset;
@@ -773,7 +773,7 @@ px_find_cipher(const char *name, PX_Cipher **res)
od->evp_ciph = i->ciph->cipher_func();
/* The PX_Cipher is allocated in current memory context */
- c = palloc(sizeof(*c));
+ c = palloc_object(PX_Cipher);
c->block_size = gen_ossl_block_size;
c->key_size = gen_ossl_key_size;
c->iv_size = gen_ossl_iv_size;
diff --git a/contrib/pgcrypto/pgp-cfb.c b/contrib/pgcrypto/pgp-cfb.c
index de41e825b0c..d8f1afc3aba 100644
--- a/contrib/pgcrypto/pgp-cfb.c
+++ b/contrib/pgcrypto/pgp-cfb.c
@@ -67,7 +67,7 @@ pgp_cfb_create(PGP_CFB **ctx_p, int algo, const uint8 *key, int key_len,
return res;
}
- ctx = palloc0(sizeof(*ctx));
+ ctx = palloc0_object(PGP_CFB);
ctx->ciph = ciph;
ctx->block_size = px_cipher_block_size(ciph);
ctx->resync = resync;
diff --git a/contrib/pgcrypto/pgp-compress.c b/contrib/pgcrypto/pgp-compress.c
index 961cf21e748..caa80ecdb45 100644
--- a/contrib/pgcrypto/pgp-compress.c
+++ b/contrib/pgcrypto/pgp-compress.c
@@ -80,7 +80,7 @@ compress_init(PushFilter *next, void *init_arg, void **priv_p)
/*
* init
*/
- st = palloc0(sizeof(*st));
+ st = palloc0_object(struct ZipStat);
st->buf_len = ZIP_OUT_BUF;
st->stream.zalloc = z_alloc;
st->stream.zfree = z_free;
@@ -211,7 +211,7 @@ decompress_init(void **priv_p, void *arg, PullFilter *src)
&& ctx->compress_algo != PGP_COMPR_ZIP)
return PXE_PGP_UNSUPPORTED_COMPR;
- dec = palloc0(sizeof(*dec));
+ dec = palloc0_object(struct DecomprData);
dec->buf_len = ZIP_OUT_BUF;
*priv_p = dec;
diff --git a/contrib/pgcrypto/pgp-decrypt.c b/contrib/pgcrypto/pgp-decrypt.c
index e1ea5b3e58d..52ca7840c6d 100644
--- a/contrib/pgcrypto/pgp-decrypt.c
+++ b/contrib/pgcrypto/pgp-decrypt.c
@@ -224,7 +224,7 @@ pgp_create_pkt_reader(PullFilter **pf_p, PullFilter *src, int len,
int pkttype, PGP_Context *ctx)
{
int res;
- struct PktData *pkt = palloc(sizeof(*pkt));
+ struct PktData *pkt = palloc_object(struct PktData);
pkt->type = pkttype;
pkt->len = len;
@@ -448,7 +448,7 @@ mdcbuf_init(void **priv_p, void *arg, PullFilter *src)
PGP_Context *ctx = arg;
struct MDCBufData *st;
- st = palloc0(sizeof(*st));
+ st = palloc0_object(struct MDCBufData);
st->buflen = sizeof(st->buf);
st->ctx = ctx;
*priv_p = st;
diff --git a/contrib/pgcrypto/pgp-encrypt.c b/contrib/pgcrypto/pgp-encrypt.c
index f7467c9b1cb..2c059804706 100644
--- a/contrib/pgcrypto/pgp-encrypt.c
+++ b/contrib/pgcrypto/pgp-encrypt.c
@@ -178,7 +178,7 @@ encrypt_init(PushFilter *next, void *init_arg, void **priv_p)
if (res < 0)
return res;
- st = palloc0(sizeof(*st));
+ st = palloc0_object(struct EncStat);
st->ciph = ciph;
*priv_p = st;
@@ -240,7 +240,7 @@ pkt_stream_init(PushFilter *next, void *init_arg, void **priv_p)
{
struct PktStreamStat *st;
- st = palloc(sizeof(*st));
+ st = palloc_object(struct PktStreamStat);
st->final_done = 0;
st->pkt_block = 1 << STREAM_BLOCK_SHIFT;
*priv_p = st;
diff --git a/contrib/pgcrypto/pgp-pgsql.c b/contrib/pgcrypto/pgp-pgsql.c
index 7c9f4c7b39b..3e47b9364ab 100644
--- a/contrib/pgcrypto/pgp-pgsql.c
+++ b/contrib/pgcrypto/pgp-pgsql.c
@@ -782,8 +782,8 @@ parse_key_value_arrays(ArrayType *key_array, ArrayType *val_array,
(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),
errmsg("mismatched array dimensions")));
- keys = (char **) palloc(sizeof(char *) * key_count);
- values = (char **) palloc(sizeof(char *) * val_count);
+ keys = palloc_array(char *, key_count);
+ values = palloc_array(char *, val_count);
for (i = 0; i < key_count; i++)
{
@@ -937,7 +937,7 @@ pgp_armor_headers(PG_FUNCTION_ARGS)
attinmeta = TupleDescGetAttInMetadata(tupdesc);
funcctx->attinmeta = attinmeta;
- state = (pgp_armor_headers_state *) palloc(sizeof(pgp_armor_headers_state));
+ state = palloc_object(pgp_armor_headers_state);
res = pgp_extract_armor_headers((uint8 *) VARDATA_ANY(data),
VARSIZE_ANY_EXHDR(data),
diff --git a/contrib/pgcrypto/pgp-pubkey.c b/contrib/pgcrypto/pgp-pubkey.c
index 9a6561caf9d..6f118865917 100644
--- a/contrib/pgcrypto/pgp-pubkey.c
+++ b/contrib/pgcrypto/pgp-pubkey.c
@@ -39,7 +39,7 @@ pgp_key_alloc(PGP_PubKey **pk_p)
{
PGP_PubKey *pk;
- pk = palloc0(sizeof(*pk));
+ pk = palloc0_object(PGP_PubKey);
*pk_p = pk;
return 0;
}
diff --git a/contrib/pgcrypto/pgp.c b/contrib/pgcrypto/pgp.c
index 8a6a6c2adf1..4e8b4f8827b 100644
--- a/contrib/pgcrypto/pgp.c
+++ b/contrib/pgcrypto/pgp.c
@@ -190,7 +190,7 @@ pgp_init(PGP_Context **ctx_p)
{
PGP_Context *ctx;
- ctx = palloc0(sizeof *ctx);
+ ctx = palloc0_object(PGP_Context);
ctx->cipher_algo = def_cipher_algo;
ctx->s2k_cipher_algo = def_s2k_cipher_algo;
diff --git a/contrib/pgcrypto/px-hmac.c b/contrib/pgcrypto/px-hmac.c
index 99174d26551..68e5cff6d6a 100644
--- a/contrib/pgcrypto/px-hmac.c
+++ b/contrib/pgcrypto/px-hmac.c
@@ -157,7 +157,7 @@ px_find_hmac(const char *name, PX_HMAC **res)
return PXE_HASH_UNUSABLE_FOR_HMAC;
}
- h = palloc(sizeof(*h));
+ h = palloc_object(PX_HMAC);
h->p.ipad = palloc(bs);
h->p.opad = palloc(bs);
h->md = md;
diff --git a/contrib/pgcrypto/px.c b/contrib/pgcrypto/px.c
index d35ccca7774..4d668d4e496 100644
--- a/contrib/pgcrypto/px.c
+++ b/contrib/pgcrypto/px.c
@@ -291,7 +291,7 @@ px_find_combo(const char *name, PX_Combo **res)
PX_Combo *cx;
- cx = palloc0(sizeof(*cx));
+ cx = palloc0_object(PX_Combo);
buf = pstrdup(name);
err = parse_cipher_name(buf, &s_cipher, &s_pad);
diff --git a/contrib/pgrowlocks/pgrowlocks.c b/contrib/pgrowlocks/pgrowlocks.c
index 7e40ab21dda..a46d75f3a39 100644
--- a/contrib/pgrowlocks/pgrowlocks.c
+++ b/contrib/pgrowlocks/pgrowlocks.c
@@ -116,7 +116,7 @@ pgrowlocks(PG_FUNCTION_ARGS)
attinmeta = TupleDescGetAttInMetadata(rsinfo->setDesc);
- values = (char **) palloc(rsinfo->setDesc->natts * sizeof(char *));
+ values = palloc_array(char *, rsinfo->setDesc->natts);
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
{
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index b92e2a0fc9f..62ba3a5acb0 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -629,7 +629,7 @@ postgresGetForeignRelSize(PlannerInfo *root,
* We use PgFdwRelationInfo to pass various information to subsequent
* functions.
*/
- fpinfo = (PgFdwRelationInfo *) palloc0(sizeof(PgFdwRelationInfo));
+ fpinfo = palloc0_object(PgFdwRelationInfo);
baserel->fdw_private = fpinfo;
/* Base foreign tables need to be pushed down always. */
@@ -1513,7 +1513,7 @@ postgresBeginForeignScan(ForeignScanState *node, int eflags)
/*
* We'll save private state in node->fdw_state.
*/
- fsstate = (PgFdwScanState *) palloc0(sizeof(PgFdwScanState));
+ fsstate = palloc0_object(PgFdwScanState);
node->fdw_state = fsstate;
/*
@@ -2663,7 +2663,7 @@ postgresBeginDirectModify(ForeignScanState *node, int eflags)
/*
* We'll save private state in node->fdw_state.
*/
- dmstate = (PgFdwDirectModifyState *) palloc0(sizeof(PgFdwDirectModifyState));
+ dmstate = palloc0_object(PgFdwDirectModifyState);
node->fdw_state = dmstate;
/*
@@ -3996,7 +3996,7 @@ create_foreign_modify(EState *estate,
ListCell *lc;
/* Begin constructing PgFdwModifyState. */
- fmstate = (PgFdwModifyState *) palloc0(sizeof(PgFdwModifyState));
+ fmstate = palloc0_object(PgFdwModifyState);
fmstate->rel = rel;
/* Identify which user to do the remote access as. */
@@ -6375,7 +6375,7 @@ postgresGetForeignJoinPaths(PlannerInfo *root,
* if found safe. Once we know that this join can be pushed down, we fill
* the entry.
*/
- fpinfo = (PgFdwRelationInfo *) palloc0(sizeof(PgFdwRelationInfo));
+ fpinfo = palloc0_object(PgFdwRelationInfo);
fpinfo->pushdown_safe = false;
joinrel->fdw_private = fpinfo;
/* attrs_used is only for base relations. */
@@ -6744,7 +6744,7 @@ postgresGetForeignUpperPaths(PlannerInfo *root, UpperRelationKind stage,
output_rel->fdw_private)
return;
- fpinfo = (PgFdwRelationInfo *) palloc0(sizeof(PgFdwRelationInfo));
+ fpinfo = palloc0_object(PgFdwRelationInfo);
fpinfo->pushdown_safe = false;
fpinfo->stage = stage;
output_rel->fdw_private = fpinfo;
@@ -6969,7 +6969,7 @@ add_foreign_ordered_paths(PlannerInfo *root, RelOptInfo *input_rel,
fpinfo->pushdown_safe = true;
/* Construct PgFdwPathExtraData */
- fpextra = (PgFdwPathExtraData *) palloc0(sizeof(PgFdwPathExtraData));
+ fpextra = palloc0_object(PgFdwPathExtraData);
fpextra->target = root->upper_targets[UPPERREL_ORDERED];
fpextra->has_final_sort = true;
@@ -7203,7 +7203,7 @@ add_foreign_final_paths(PlannerInfo *root, RelOptInfo *input_rel,
fpinfo->pushdown_safe = true;
/* Construct PgFdwPathExtraData */
- fpextra = (PgFdwPathExtraData *) palloc0(sizeof(PgFdwPathExtraData));
+ fpextra = palloc0_object(PgFdwPathExtraData);
fpextra->target = root->upper_targets[UPPERREL_FINAL];
fpextra->has_final_sort = has_final_sort;
fpextra->has_limit = extra->limit_needed;
@@ -7606,8 +7606,8 @@ make_tuple_from_result_row(PGresult *res,
tupdesc = fsstate->ss.ss_ScanTupleSlot->tts_tupleDescriptor;
}
- values = (Datum *) palloc0(tupdesc->natts * sizeof(Datum));
- nulls = (bool *) palloc(tupdesc->natts * sizeof(bool));
+ values = palloc0_array(Datum, tupdesc->natts);
+ nulls = palloc_array(bool, tupdesc->natts);
/* Initialize to nulls for any columns not present in result */
memset(nulls, true, tupdesc->natts * sizeof(bool));
diff --git a/contrib/seg/seg.c b/contrib/seg/seg.c
index fd4216edc5d..732cb89b339 100644
--- a/contrib/seg/seg.c
+++ b/contrib/seg/seg.c
@@ -104,7 +104,7 @@ Datum
seg_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
- SEG *result = palloc(sizeof(SEG));
+ SEG *result = palloc_object(SEG);
yyscan_t scanner;
seg_scanner_init(str, &scanner);
@@ -341,8 +341,7 @@ gseg_picksplit(PG_FUNCTION_ARGS)
/*
* Prepare the auxiliary array and sort it.
*/
- sort_items = (gseg_picksplit_item *)
- palloc(maxoff * sizeof(gseg_picksplit_item));
+ sort_items = palloc_array(gseg_picksplit_item, maxoff);
for (i = 1; i <= maxoff; i++)
{
seg = DatumGetSegP(entryvec->vector[i].key);
@@ -367,7 +366,7 @@ gseg_picksplit(PG_FUNCTION_ARGS)
/*
* Emit segments to the left output page, and compute its bounding box.
*/
- seg_l = (SEG *) palloc(sizeof(SEG));
+ seg_l = palloc_object(SEG);
memcpy(seg_l, sort_items[0].data, sizeof(SEG));
*left++ = sort_items[0].index;
v->spl_nleft++;
@@ -385,7 +384,7 @@ gseg_picksplit(PG_FUNCTION_ARGS)
/*
* Likewise for the right page.
*/
- seg_r = (SEG *) palloc(sizeof(SEG));
+ seg_r = palloc_object(SEG);
memcpy(seg_r, sort_items[firstright].data, sizeof(SEG));
*right++ = sort_items[firstright].index;
v->spl_nright++;
@@ -629,7 +628,7 @@ seg_union(PG_FUNCTION_ARGS)
SEG *b = PG_GETARG_SEG_P(1);
SEG *n;
- n = (SEG *) palloc(sizeof(*n));
+ n = palloc_object(SEG);
/* take max of upper endpoints */
if (a->upper > b->upper)
@@ -669,7 +668,7 @@ seg_inter(PG_FUNCTION_ARGS)
SEG *b = PG_GETARG_SEG_P(1);
SEG *n;
- n = (SEG *) palloc(sizeof(*n));
+ n = palloc_object(SEG);
/* take min of upper endpoints */
if (a->upper < b->upper)
diff --git a/contrib/sepgsql/label.c b/contrib/sepgsql/label.c
index 996ce174454..e51d5d7de64 100644
--- a/contrib/sepgsql/label.c
+++ b/contrib/sepgsql/label.c
@@ -146,7 +146,7 @@ sepgsql_set_client_label(const char *new_label)
*/
oldcxt = MemoryContextSwitchTo(CurTransactionContext);
- plabel = palloc0(sizeof(pending_label));
+ plabel = palloc0_object(pending_label);
plabel->subid = GetCurrentSubTransactionId();
if (new_label)
plabel->label = pstrdup(new_label);
diff --git a/contrib/sepgsql/uavc.c b/contrib/sepgsql/uavc.c
index 65ea8e7946a..33feee4d42d 100644
--- a/contrib/sepgsql/uavc.c
+++ b/contrib/sepgsql/uavc.c
@@ -257,7 +257,7 @@ sepgsql_avc_compute(const char *scontext, const char *tcontext, uint16 tclass)
*/
oldctx = MemoryContextSwitchTo(avc_mem_cxt);
- cache = palloc0(sizeof(avc_cache));
+ cache = palloc0_object(avc_cache);
cache->hash = hash;
cache->scontext = pstrdup(scontext);
diff --git a/contrib/spi/autoinc.c b/contrib/spi/autoinc.c
index 8bf742230e0..b30c7ae1448 100644
--- a/contrib/spi/autoinc.c
+++ b/contrib/spi/autoinc.c
@@ -64,9 +64,9 @@ autoinc(PG_FUNCTION_ARGS)
args = trigger->tgargs;
tupdesc = rel->rd_att;
- chattrs = (int *) palloc(nargs / 2 * sizeof(int));
- newvals = (Datum *) palloc(nargs / 2 * sizeof(Datum));
- newnulls = (bool *) palloc(nargs / 2 * sizeof(bool));
+ chattrs = palloc_array(int, nargs / 2);
+ newvals = palloc_array(Datum, nargs / 2);
+ newnulls = palloc_array(bool, nargs / 2);
for (i = 0; i < nargs;)
{
diff --git a/contrib/spi/refint.c b/contrib/spi/refint.c
index e1aef7cd2a3..e0868826fb7 100644
--- a/contrib/spi/refint.c
+++ b/contrib/spi/refint.c
@@ -114,7 +114,7 @@ check_primary_key(PG_FUNCTION_ARGS)
* We use SPI plan preparation feature, so allocate space to place key
* values.
*/
- kvals = (Datum *) palloc(nkeys * sizeof(Datum));
+ kvals = palloc_array(Datum, nkeys);
/*
* Construct ident string as TriggerName $ TriggeredRelationId and try to
@@ -125,7 +125,7 @@ check_primary_key(PG_FUNCTION_ARGS)
/* if there is no plan then allocate argtypes for preparation */
if (plan->nplans <= 0)
- argtypes = (Oid *) palloc(nkeys * sizeof(Oid));
+ argtypes = palloc_array(Oid, nkeys);
/* For each column in key ... */
for (i = 0; i < nkeys; i++)
@@ -332,7 +332,7 @@ check_foreign_key(PG_FUNCTION_ARGS)
* We use SPI plan preparation feature, so allocate space to place key
* values.
*/
- kvals = (Datum *) palloc(nkeys * sizeof(Datum));
+ kvals = palloc_array(Datum, nkeys);
/*
* Construct ident string as TriggerName $ TriggeredRelationId and try to
@@ -343,7 +343,7 @@ check_foreign_key(PG_FUNCTION_ARGS)
/* if there is no plan(s) then allocate argtypes for preparation */
if (plan->nplans <= 0)
- argtypes = (Oid *) palloc(nkeys * sizeof(Oid));
+ argtypes = palloc_array(Oid, nkeys);
/*
* else - check that we have exactly nrefs plan(s) ready
diff --git a/contrib/sslinfo/sslinfo.c b/contrib/sslinfo/sslinfo.c
index 5fd46b98741..2f8f5397db0 100644
--- a/contrib/sslinfo/sslinfo.c
+++ b/contrib/sslinfo/sslinfo.c
@@ -382,7 +382,7 @@ ssl_extension_info(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* Create a user function context for cross-call persistence */
- fctx = (SSLExtensionInfoContext *) palloc(sizeof(SSLExtensionInfoContext));
+ fctx = palloc_object(SSLExtensionInfoContext);
/* Construct tuple descriptor */
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
diff --git a/contrib/tablefunc/tablefunc.c b/contrib/tablefunc/tablefunc.c
index 4f2abed702c..8d994c07514 100644
--- a/contrib/tablefunc/tablefunc.c
+++ b/contrib/tablefunc/tablefunc.c
@@ -204,7 +204,7 @@ normal_rand(PG_FUNCTION_ARGS)
funcctx->max_calls = num_tuples;
/* allocate memory for user context */
- fctx = (normal_rand_fctx *) palloc(sizeof(normal_rand_fctx));
+ fctx = palloc_object(normal_rand_fctx);
/*
* Use fctx to keep track of upper and lower bounds from call to call.
@@ -482,7 +482,7 @@ crosstab(PG_FUNCTION_ARGS)
char **values;
/* allocate and zero space */
- values = (char **) palloc0((1 + num_categories) * sizeof(char *));
+ values = palloc0_array(char *, (1 + num_categories));
/*
* now loop through the sql results and assign each value in sequence
@@ -763,7 +763,7 @@ load_categories_hash(char *cats_sql, MemoryContext per_query_ctx)
SPIcontext = MemoryContextSwitchTo(per_query_ctx);
- catdesc = (crosstab_cat_desc *) palloc(sizeof(crosstab_cat_desc));
+ catdesc = palloc_object(crosstab_cat_desc);
catdesc->catname = catname;
catdesc->attidx = i;
@@ -859,7 +859,7 @@ get_crosstab_tuplestore(char *sql,
result_ncols, tupdesc->natts)));
/* allocate space and make sure it's clear */
- values = (char **) palloc0(result_ncols * sizeof(char *));
+ values = palloc0_array(char *, result_ncols);
for (i = 0; i < proc; i++)
{
@@ -1242,9 +1242,11 @@ build_tuplestore_recursively(char *key_fld,
}
if (show_branch)
- values = (char **) palloc((CONNECTBY_NCOLS + serial_column) * sizeof(char *));
+ values = palloc_array(char *,
+ (CONNECTBY_NCOLS + serial_column));
else
- values = (char **) palloc((CONNECTBY_NCOLS_NOBRANCH + serial_column) * sizeof(char *));
+ values = palloc_array(char *,
+ (CONNECTBY_NCOLS_NOBRANCH + serial_column));
/* First time through, do a little setup */
if (level == 0)
diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c
index 0113b196363..b6e783324c1 100644
--- a/contrib/test_decoding/test_decoding.c
+++ b/contrib/test_decoding/test_decoding.c
@@ -160,7 +160,7 @@ pg_decode_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
TestDecodingData *data;
bool enable_streaming = false;
- data = palloc0(sizeof(TestDecodingData));
+ data = palloc0_object(TestDecodingData);
data->context = AllocSetContextCreate(ctx->context,
"text conversion context",
ALLOCSET_DEFAULT_SIZES);
diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c
index 352802ef8e8..7bb25a36cb7 100644
--- a/contrib/unaccent/unaccent.c
+++ b/contrib/unaccent/unaccent.c
@@ -57,7 +57,7 @@ placeChar(TrieChar *node, const unsigned char *str, int lenstr,
TrieChar *curnode;
if (!node)
- node = (TrieChar *) palloc0(sizeof(TrieChar) * 256);
+ node = palloc0_array(TrieChar, 256);
Assert(lenstr > 0); /* else str[0] doesn't exist */
@@ -236,7 +236,8 @@ initTrie(const char *filename)
if (trgquoted && state > 0)
{
/* Ignore first and end quotes */
- trgstore = (char *) palloc(sizeof(char) * (trglen - 2));
+ trgstore = palloc_array(char,
+ (trglen - 2));
trgstorelen = 0;
for (int i = 1; i < trglen - 1; i++)
{
@@ -249,7 +250,7 @@ initTrie(const char *filename)
}
else
{
- trgstore = (char *) palloc(sizeof(char) * trglen);
+ trgstore = palloc_array(char, trglen);
trgstorelen = trglen;
memcpy(trgstore, trg, trgstorelen);
}
@@ -418,7 +419,7 @@ unaccent_lexize(PG_FUNCTION_ARGS)
/* return a result only if we made at least one substitution */
if (buf.data != NULL)
{
- res = (TSLexeme *) palloc0(sizeof(TSLexeme) * 2);
+ res = palloc0_array(TSLexeme, 2);
res->lexeme = buf.data;
res->flags = TSL_FILTER;
}
diff --git a/contrib/xml2/xpath.c b/contrib/xml2/xpath.c
index f7e3f485fe1..2525868c9eb 100644
--- a/contrib/xml2/xpath.c
+++ b/contrib/xml2/xpath.c
@@ -526,8 +526,8 @@ xpath_table(PG_FUNCTION_ARGS)
attinmeta = TupleDescGetAttInMetadata(rsinfo->setDesc);
- values = (char **) palloc(rsinfo->setDesc->natts * sizeof(char *));
- xpaths = (xmlChar **) palloc(rsinfo->setDesc->natts * sizeof(xmlChar *));
+ values = palloc_array(char *, rsinfo->setDesc->natts);
+ xpaths = palloc_array(xmlChar *, rsinfo->setDesc->natts);
/*
* Split XPaths. xpathset is a writable CString.
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 4289142e20b..dc84f754a10 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -313,7 +313,7 @@ initialize_brin_insertstate(Relation idxRel, IndexInfo *indexInfo)
MemoryContext oldcxt;
oldcxt = MemoryContextSwitchTo(indexInfo->ii_Context);
- bistate = palloc0(sizeof(BrinInsertState));
+ bistate = palloc0_object(BrinInsertState);
bistate->bis_desc = brin_build_desc(idxRel);
bistate->bis_rmAccess = brinRevmapInitialize(idxRel,
&bistate->bis_pages_per_range);
@@ -2362,7 +2362,7 @@ _brin_begin_parallel(BrinBuildState *buildstate, Relation heap, Relation index,
Size estsort;
BrinShared *brinshared;
Sharedsort *sharedsort;
- BrinLeader *brinleader = (BrinLeader *) palloc0(sizeof(BrinLeader));
+ BrinLeader *brinleader = palloc0_object(BrinLeader);
WalUsage *walusage;
BufferUsage *bufferusage;
bool leaderparticipates = true;
diff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.c
index 88214720bff..bfda1dab0f5 100644
--- a/src/backend/access/brin/brin_minmax_multi.c
+++ b/src/backend/access/brin/brin_minmax_multi.c
@@ -1341,7 +1341,7 @@ build_distances(FmgrInfo *distanceFn, Oid colloid,
return NULL;
ndistances = (neranges - 1);
- distances = (DistanceValue *) palloc0(sizeof(DistanceValue) * ndistances);
+ distances = palloc0_array(DistanceValue, ndistances);
/*
* Walk through the ranges once and compute the distance between the
@@ -1393,7 +1393,7 @@ build_expanded_ranges(FmgrInfo *cmp, Oid colloid, Ranges *ranges,
/* both ranges and points are expanded into a separate element */
neranges = ranges->nranges + ranges->nvalues;
- eranges = (ExpandedRange *) palloc0(neranges * sizeof(ExpandedRange));
+ eranges = palloc0_array(ExpandedRange, neranges);
/* fill the expanded ranges */
fill_expanded_ranges(eranges, neranges, ranges);
@@ -1505,7 +1505,7 @@ reduce_expanded_ranges(ExpandedRange *eranges, int neranges,
/* allocate space for the boundary values */
nvalues = 0;
- values = (Datum *) palloc(sizeof(Datum) * max_values);
+ values = palloc_array(Datum, max_values);
/* add the global min/max values, from the first/last range */
values[nvalues++] = eranges[0].minval;
@@ -2786,7 +2786,7 @@ brin_minmax_multi_union(PG_FUNCTION_ARGS)
oldctx = MemoryContextSwitchTo(ctx);
/* allocate and fill */
- eranges = (ExpandedRange *) palloc0(neranges * sizeof(ExpandedRange));
+ eranges = palloc0_array(ExpandedRange, neranges);
/* fill the expanded ranges with entries for the first range */
fill_expanded_ranges(eranges, ranges_a->nranges + ranges_a->nvalues,
diff --git a/src/backend/access/brin/brin_revmap.c b/src/backend/access/brin/brin_revmap.c
index ea722d95ebc..458788b5a11 100644
--- a/src/backend/access/brin/brin_revmap.c
+++ b/src/backend/access/brin/brin_revmap.c
@@ -79,7 +79,7 @@ brinRevmapInitialize(Relation idxrel, BlockNumber *pagesPerRange)
page = BufferGetPage(meta);
metadata = (BrinMetaPageData *) PageGetContents(page);
- revmap = palloc(sizeof(BrinRevmap));
+ revmap = palloc_object(BrinRevmap);
revmap->rm_irel = idxrel;
revmap->rm_pagesPerRange = metadata->pagesPerRange;
revmap->rm_lastRevmapPage = metadata->lastRevmapPage;
diff --git a/src/backend/access/brin/brin_tuple.c b/src/backend/access/brin/brin_tuple.c
index 861f397e6db..5100d52db64 100644
--- a/src/backend/access/brin/brin_tuple.c
+++ b/src/backend/access/brin/brin_tuple.c
@@ -119,13 +119,13 @@ brin_form_tuple(BrinDesc *brdesc, BlockNumber blkno, BrinMemTuple *tuple,
Assert(brdesc->bd_totalstored > 0);
- values = (Datum *) palloc(sizeof(Datum) * brdesc->bd_totalstored);
- nulls = (bool *) palloc0(sizeof(bool) * brdesc->bd_totalstored);
- phony_nullbitmap = (bits8 *)
- palloc(sizeof(bits8) * BITMAPLEN(brdesc->bd_totalstored));
+ values = palloc_array(Datum, brdesc->bd_totalstored);
+ nulls = palloc0_array(bool, brdesc->bd_totalstored);
+ phony_nullbitmap = palloc_array(bits8,
+ BITMAPLEN(brdesc->bd_totalstored));
#ifdef TOAST_INDEX_HACK
- untoasted_values = (Datum *) palloc(sizeof(Datum) * brdesc->bd_totalstored);
+ untoasted_values = palloc_array(Datum, brdesc->bd_totalstored);
#endif
/*
diff --git a/src/backend/access/common/attmap.c b/src/backend/access/common/attmap.c
index 4b6cfe05c02..53ee3423c4e 100644
--- a/src/backend/access/common/attmap.c
+++ b/src/backend/access/common/attmap.c
@@ -41,7 +41,7 @@ make_attrmap(int maplen)
{
AttrMap *res;
- res = (AttrMap *) palloc0(sizeof(AttrMap));
+ res = palloc0_object(AttrMap);
res->maplen = maplen;
res->attnums = (AttrNumber *) palloc0(sizeof(AttrNumber) * maplen);
return res;
diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c
index b43cb9ccff4..a32964724a2 100644
--- a/src/backend/access/common/heaptuple.c
+++ b/src/backend/access/common/heaptuple.c
@@ -1230,8 +1230,8 @@ heap_modify_tuple(HeapTuple tuple,
* O(N^2) if there are many non-replaced columns, so it seems better to
* err on the side of linear cost.
*/
- values = (Datum *) palloc(numberOfAttributes * sizeof(Datum));
- isnull = (bool *) palloc(numberOfAttributes * sizeof(bool));
+ values = palloc_array(Datum, numberOfAttributes);
+ isnull = palloc_array(bool, numberOfAttributes);
heap_deform_tuple(tuple, tupleDesc, values, isnull);
@@ -1292,8 +1292,8 @@ heap_modify_tuple_by_cols(HeapTuple tuple,
* allocate and fill values and isnull arrays from the tuple, then replace
* selected columns from the input arrays.
*/
- values = (Datum *) palloc(numberOfAttributes * sizeof(Datum));
- isnull = (bool *) palloc(numberOfAttributes * sizeof(bool));
+ values = palloc_array(Datum, numberOfAttributes);
+ isnull = palloc_array(bool, numberOfAttributes);
heap_deform_tuple(tuple, tupleDesc, values, isnull);
diff --git a/src/backend/access/common/printtup.c b/src/backend/access/common/printtup.c
index 830a3d883aa..2bb1f490d8b 100644
--- a/src/backend/access/common/printtup.c
+++ b/src/backend/access/common/printtup.c
@@ -70,7 +70,7 @@ typedef struct
DestReceiver *
printtup_create_DR(CommandDest dest)
{
- DR_printtup *self = (DR_printtup *) palloc0(sizeof(DR_printtup));
+ DR_printtup *self = palloc0_object(DR_printtup);
self->pub.receiveSlot = printtup; /* might get changed later */
self->pub.rStartup = printtup_startup;
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index e587abd9990..4d99043269b 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -710,13 +710,15 @@ add_reloption(relopt_gen *newoption)
if (max_custom_options == 0)
{
max_custom_options = 8;
- custom_options = palloc(max_custom_options * sizeof(relopt_gen *));
+ custom_options = palloc_array(relopt_gen *,
+ max_custom_options);
}
else
{
max_custom_options *= 2;
- custom_options = repalloc(custom_options,
- max_custom_options * sizeof(relopt_gen *));
+ custom_options = repalloc_array(custom_options,
+ relopt_gen *,
+ max_custom_options);
}
MemoryContextSwitchTo(oldcxt);
}
@@ -756,7 +758,7 @@ register_reloptions_validator(local_relopts *relopts, relopts_validator validato
static void
add_local_reloption(local_relopts *relopts, relopt_gen *newoption, int offset)
{
- local_relopt *opt = palloc(sizeof(*opt));
+ local_relopt *opt = palloc_object(local_relopt);
Assert(offset < relopts->relopt_struct_size);
@@ -1515,7 +1517,7 @@ parseRelOptions(Datum options, bool validate, relopt_kind kind,
if (numoptions > 0)
{
- reloptions = palloc(numoptions * sizeof(relopt_value));
+ reloptions = palloc_array(relopt_value, numoptions);
for (i = 0, j = 0; relOpts[i]; i++)
{
@@ -1541,7 +1543,7 @@ static relopt_value *
parseLocalRelOptions(local_relopts *relopts, Datum options, bool validate)
{
int nopts = list_length(relopts->options);
- relopt_value *values = palloc(sizeof(*values) * nopts);
+ relopt_value *values = palloc_array(relopt_value, nopts);
ListCell *lc;
int i = 0;
@@ -1945,7 +1947,7 @@ void *
build_local_reloptions(local_relopts *relopts, Datum options, bool validate)
{
int noptions = list_length(relopts->options);
- relopt_parse_elt *elems = palloc(sizeof(*elems) * noptions);
+ relopt_parse_elt *elems = palloc_array(relopt_parse_elt, noptions);
relopt_value *vals;
void *opts;
int i = 0;
diff --git a/src/backend/access/common/tidstore.c b/src/backend/access/common/tidstore.c
index 5bd75fb499c..7a1ab125b48 100644
--- a/src/backend/access/common/tidstore.c
+++ b/src/backend/access/common/tidstore.c
@@ -166,7 +166,7 @@ TidStoreCreateLocal(size_t max_bytes, bool insert_only)
size_t minContextSize = ALLOCSET_DEFAULT_MINSIZE;
size_t maxBlockSize = ALLOCSET_DEFAULT_MAXSIZE;
- ts = palloc0(sizeof(TidStore));
+ ts = palloc0_object(TidStore);
/* choose the maxBlockSize to be no larger than 1/16 of max_bytes */
while (16 * maxBlockSize > max_bytes)
@@ -212,7 +212,7 @@ TidStoreCreateShared(size_t max_bytes, int tranche_id)
size_t dsa_init_size = DSA_DEFAULT_INIT_SEGMENT_SIZE;
size_t dsa_max_size = DSA_MAX_SEGMENT_SIZE;
- ts = palloc0(sizeof(TidStore));
+ ts = palloc0_object(TidStore);
/*
* Choose the initial and maximum DSA segment sizes to be no longer than
@@ -250,7 +250,7 @@ TidStoreAttach(dsa_handle area_handle, dsa_pointer handle)
Assert(DsaPointerIsValid(handle));
/* create per-backend state */
- ts = palloc0(sizeof(TidStore));
+ ts = palloc0_object(TidStore);
area = dsa_attach(area_handle);
@@ -472,7 +472,7 @@ TidStoreBeginIterate(TidStore *ts)
{
TidStoreIter *iter;
- iter = palloc0(sizeof(TidStoreIter));
+ iter = palloc0_object(TidStoreIter);
iter->ts = ts;
if (TidStoreIsShared(ts))
diff --git a/src/backend/access/common/tupconvert.c b/src/backend/access/common/tupconvert.c
index 54dc2f4ab80..3c227b8a9e0 100644
--- a/src/backend/access/common/tupconvert.c
+++ b/src/backend/access/common/tupconvert.c
@@ -74,7 +74,7 @@ convert_tuples_by_position(TupleDesc indesc,
}
/* Prepare the map structure */
- map = (TupleConversionMap *) palloc(sizeof(TupleConversionMap));
+ map = palloc_object(TupleConversionMap);
map->indesc = indesc;
map->outdesc = outdesc;
map->attrMap = attrMap;
@@ -131,7 +131,7 @@ convert_tuples_by_name_attrmap(TupleDesc indesc,
Assert(attrMap != NULL);
/* Prepare the map structure */
- map = (TupleConversionMap *) palloc(sizeof(TupleConversionMap));
+ map = palloc_object(TupleConversionMap);
map->indesc = indesc;
map->outdesc = outdesc;
map->attrMap = attrMap;
diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
index fe197447912..2eb55b4d3ee 100644
--- a/src/backend/access/common/tupdesc.c
+++ b/src/backend/access/common/tupdesc.c
@@ -338,7 +338,7 @@ CreateTupleDescCopyConstr(TupleDesc tupdesc)
/* Copy the TupleConstr data structure, if any */
if (constr)
{
- TupleConstr *cpy = (TupleConstr *) palloc0(sizeof(TupleConstr));
+ TupleConstr *cpy = palloc0_object(TupleConstr);
cpy->has_not_null = constr->has_not_null;
cpy->has_generated_stored = constr->has_generated_stored;
diff --git a/src/backend/access/gin/ginbtree.c b/src/backend/access/gin/ginbtree.c
index 57741263abd..c31ed7ad5d4 100644
--- a/src/backend/access/gin/ginbtree.c
+++ b/src/backend/access/gin/ginbtree.c
@@ -85,7 +85,7 @@ ginFindLeafPage(GinBtree btree, bool searchMode,
{
GinBtreeStack *stack;
- stack = (GinBtreeStack *) palloc(sizeof(GinBtreeStack));
+ stack = palloc_object(GinBtreeStack);
stack->blkno = btree->rootBlkno;
stack->buffer = ReadBuffer(btree->index, btree->rootBlkno);
stack->parent = NULL;
@@ -152,7 +152,7 @@ ginFindLeafPage(GinBtree btree, bool searchMode,
}
else
{
- GinBtreeStack *ptr = (GinBtreeStack *) palloc(sizeof(GinBtreeStack));
+ GinBtreeStack *ptr = palloc_object(GinBtreeStack);
ptr->parent = stack;
stack = ptr;
@@ -246,7 +246,7 @@ ginFindParents(GinBtree btree, GinBtreeStack *stack)
blkno = root->blkno;
buffer = root->buffer;
- ptr = (GinBtreeStack *) palloc(sizeof(GinBtreeStack));
+ ptr = palloc_object(GinBtreeStack);
for (;;)
{
diff --git a/src/backend/access/gin/gindatapage.c b/src/backend/access/gin/gindatapage.c
index 662626efd82..7e175831a61 100644
--- a/src/backend/access/gin/gindatapage.c
+++ b/src/backend/access/gin/gindatapage.c
@@ -1332,7 +1332,7 @@ dataSplitPageInternal(GinBtree btree, Buffer origbuf,
static void *
dataPrepareDownlink(GinBtree btree, Buffer lbuf)
{
- PostingItem *pitem = palloc(sizeof(PostingItem));
+ PostingItem *pitem = palloc_object(PostingItem);
Page lpage = BufferGetPage(lbuf);
PostingItemSetBlockNumber(pitem, BufferGetBlockNumber(lbuf));
@@ -1374,7 +1374,7 @@ disassembleLeaf(Page page)
Pointer segbegin;
Pointer segend;
- leaf = palloc0(sizeof(disassembledLeaf));
+ leaf = palloc0_object(disassembledLeaf);
dlist_init(&leaf->segments);
if (GinPageIsCompressed(page))
@@ -1387,7 +1387,7 @@ disassembleLeaf(Page page)
segend = segbegin + GinDataLeafPageGetPostingListSize(page);
while ((Pointer) seg < segend)
{
- leafSegmentInfo *seginfo = palloc(sizeof(leafSegmentInfo));
+ leafSegmentInfo *seginfo = palloc_object(leafSegmentInfo);
seginfo->action = GIN_SEGMENT_UNMODIFIED;
seginfo->seg = seg;
@@ -1414,7 +1414,7 @@ disassembleLeaf(Page page)
if (nuncompressed > 0)
{
- seginfo = palloc(sizeof(leafSegmentInfo));
+ seginfo = palloc_object(leafSegmentInfo);
seginfo->action = GIN_SEGMENT_REPLACE;
seginfo->seg = NULL;
@@ -1455,7 +1455,7 @@ addItemsToLeaf(disassembledLeaf *leaf, ItemPointer newItems, int nNewItems)
*/
if (dlist_is_empty(&leaf->segments))
{
- newseg = palloc(sizeof(leafSegmentInfo));
+ newseg = palloc_object(leafSegmentInfo);
newseg->seg = NULL;
newseg->items = newItems;
newseg->nitems = nNewItems;
@@ -1512,7 +1512,7 @@ addItemsToLeaf(disassembledLeaf *leaf, ItemPointer newItems, int nNewItems)
cur->seg != NULL &&
SizeOfGinPostingList(cur->seg) >= GinPostingListSegmentTargetSize)
{
- newseg = palloc(sizeof(leafSegmentInfo));
+ newseg = palloc_object(leafSegmentInfo);
newseg->seg = NULL;
newseg->items = nextnew;
newseg->nitems = nthis;
@@ -1629,7 +1629,7 @@ leafRepackItems(disassembledLeaf *leaf, ItemPointer remaining)
if (seginfo->action != GIN_SEGMENT_INSERT)
seginfo->action = GIN_SEGMENT_REPLACE;
- nextseg = palloc(sizeof(leafSegmentInfo));
+ nextseg = palloc_object(leafSegmentInfo);
nextseg->action = GIN_SEGMENT_INSERT;
nextseg->seg = NULL;
nextseg->items = &seginfo->items[npacked];
diff --git a/src/backend/access/gin/ginentrypage.c b/src/backend/access/gin/ginentrypage.c
index c668d809f60..dc9b23e17af 100644
--- a/src/backend/access/gin/ginentrypage.c
+++ b/src/backend/access/gin/ginentrypage.c
@@ -708,7 +708,7 @@ entryPrepareDownlink(GinBtree btree, Buffer lbuf)
itup = getRightMostTuple(lpage);
- insertData = palloc(sizeof(GinBtreeEntryInsertData));
+ insertData = palloc_object(GinBtreeEntryInsertData);
insertData->entry = GinFormInteriorTuple(itup, lpage, lblkno);
insertData->isDelete = false;
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 330805626ee..9326e765bd3 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -551,7 +551,7 @@ startScanKey(GinState *ginstate, GinScanOpaque so, GinScanKey key)
{
MemoryContextSwitchTo(so->tempCtx);
- entryIndexes = (int *) palloc(sizeof(int) * key->nentries);
+ entryIndexes = palloc_array(int, key->nentries);
for (i = 0; i < key->nentries; i++)
entryIndexes[i] = i;
qsort_arg(entryIndexes, key->nentries, sizeof(int),
diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c
index 8e1788dbcf7..a09f3a743d5 100644
--- a/src/backend/access/gin/gininsert.c
+++ b/src/backend/access/gin/gininsert.c
@@ -418,7 +418,7 @@ ginbuild(Relation heap, Relation index, IndexInfo *indexInfo)
/*
* Return statistics
*/
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = buildstate.indtuples;
@@ -494,7 +494,7 @@ gininsert(Relation index, Datum *values, bool *isnull,
if (ginstate == NULL)
{
oldCtx = MemoryContextSwitchTo(indexInfo->ii_Context);
- ginstate = (GinState *) palloc(sizeof(GinState));
+ ginstate = palloc_object(GinState);
initGinState(ginstate, index);
indexInfo->ii_AmCache = ginstate;
MemoryContextSwitchTo(oldCtx);
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index 7d1e6615260..590dff23cf9 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -353,7 +353,7 @@ ginNewScanKey(IndexScanDesc scan)
* didn't create a nullFlags array, we assume everything is non-null.
* While at it, detect whether any null keys are present.
*/
- categories = (GinNullCategory *) palloc0(nQueryValues * sizeof(GinNullCategory));
+ categories = palloc0_array(GinNullCategory, nQueryValues);
if (nullFlags)
{
int32 j;
diff --git a/src/backend/access/gin/ginutil.c b/src/backend/access/gin/ginutil.c
index 2500d16b7bc..44a41fbfe43 100644
--- a/src/backend/access/gin/ginutil.c
+++ b/src/backend/access/gin/ginutil.c
@@ -496,7 +496,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,
if (isNull)
{
*nentries = 1;
- entries = (Datum *) palloc(sizeof(Datum));
+ entries = palloc_object(Datum);
entries[0] = (Datum) 0;
*categories = (GinNullCategory *) palloc(sizeof(GinNullCategory));
(*categories)[0] = GIN_CAT_NULL_ITEM;
@@ -518,7 +518,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,
if (entries == NULL || *nentries <= 0)
{
*nentries = 1;
- entries = (Datum *) palloc(sizeof(Datum));
+ entries = palloc_object(Datum);
entries[0] = (Datum) 0;
*categories = (GinNullCategory *) palloc(sizeof(GinNullCategory));
(*categories)[0] = GIN_CAT_EMPTY_ITEM;
@@ -530,7 +530,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,
* assuming that everything's non-null.
*/
if (nullFlags == NULL)
- nullFlags = (bool *) palloc0(*nentries * sizeof(bool));
+ nullFlags = palloc0_array(bool, *nentries);
/*
* If there's more than one key, sort and unique-ify.
@@ -544,7 +544,7 @@ ginExtractEntries(GinState *ginstate, OffsetNumber attnum,
keyEntryData *keydata;
cmpEntriesArg arg;
- keydata = (keyEntryData *) palloc(*nentries * sizeof(keyEntryData));
+ keydata = palloc_array(keyEntryData, *nentries);
for (i = 0; i < *nentries; i++)
{
keydata[i].datum = entries[i];
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index d98c54b7cf7..334fcfe753d 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -260,7 +260,7 @@ ginScanToDelete(GinVacuumState *gvs, BlockNumber blkno, bool isRoot,
{
if (!parent->child)
{
- me = (DataPageDeleteStack *) palloc0(sizeof(DataPageDeleteStack));
+ me = palloc0_object(DataPageDeleteStack);
me->parent = parent;
parent->child = me;
me->leftBuffer = InvalidBuffer;
@@ -584,7 +584,7 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
if (stats == NULL)
{
/* Yes, so initialize stats to zeroes */
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
/*
* and cleanup any pending inserts
@@ -714,7 +714,7 @@ ginvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
*/
if (stats == NULL)
{
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
initGinState(&ginstate, index);
ginInsertCleanup(&ginstate, !AmAutoVacuumWorkerProcess(),
false, true, stats);
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index b6bc75b44e3..58d7039d10d 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -43,7 +43,7 @@ static void gistprunepage(Relation rel, Page page, Buffer buffer,
#define ROTATEDIST(d) do { \
- SplitPageLayout *tmp = (SplitPageLayout *) palloc0(sizeof(SplitPageLayout)); \
+ SplitPageLayout *tmp = palloc0_object(SplitPageLayout); \
tmp->block.blkno = InvalidBlockNumber; \
tmp->buffer = InvalidBuffer; \
tmp->next = (d); \
@@ -387,7 +387,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
/* Prepare a vector of all the downlinks */
for (ptr = dist; ptr; ptr = ptr->next)
ndownlinks++;
- downlinks = palloc(sizeof(IndexTuple) * ndownlinks);
+ downlinks = palloc_array(IndexTuple, ndownlinks);
for (i = 0, ptr = dist; ptr; ptr = ptr->next)
downlinks[i++] = ptr->itup;
@@ -405,7 +405,7 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate,
/* Prepare split-info to be returned to caller */
for (ptr = dist; ptr; ptr = ptr->next)
{
- GISTPageSplitInfo *si = palloc(sizeof(GISTPageSplitInfo));
+ GISTPageSplitInfo *si = palloc_object(GISTPageSplitInfo);
si->buf = ptr->buffer;
si->downlink = ptr->itup;
@@ -819,7 +819,7 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace,
xlocked = false;
/* descend to the chosen child */
- item = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
+ item = palloc0_object(GISTInsertStack);
item->blkno = childblkno;
item->parent = stack;
item->downlinkoffnum = downlinkoffnum;
@@ -919,7 +919,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
*ptr;
BlockNumber blkno;
- top = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
+ top = palloc0_object(GISTInsertStack);
top->blkno = GIST_ROOT_BLKNO;
top->downlinkoffnum = InvalidOffsetNumber;
@@ -971,7 +971,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
* leaf pages, and we assume that there can't be any non-leaf
* pages behind leaf pages.
*/
- ptr = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
+ ptr = palloc0_object(GISTInsertStack);
ptr->blkno = GistPageGetOpaque(page)->rightlink;
ptr->downlinkoffnum = InvalidOffsetNumber;
ptr->parent = top->parent;
@@ -996,7 +996,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
else
{
/* Append this child to the list of pages to visit later */
- ptr = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
+ ptr = palloc0_object(GISTInsertStack);
ptr->blkno = blkno;
ptr->downlinkoffnum = i;
ptr->parent = top;
@@ -1207,7 +1207,7 @@ gistfixsplit(GISTInsertState *state, GISTSTATE *giststate)
*/
for (;;)
{
- GISTPageSplitInfo *si = palloc(sizeof(GISTPageSplitInfo));
+ GISTPageSplitInfo *si = palloc_object(GISTPageSplitInfo);
IndexTuple downlink;
page = BufferGetPage(buf);
@@ -1471,8 +1471,8 @@ gistSplit(Relation r,
gistSplitByKey(r, page, itup, len, giststate, &v, 0);
/* form left and right vector */
- lvectup = (IndexTuple *) palloc(sizeof(IndexTuple) * (len + 1));
- rvectup = (IndexTuple *) palloc(sizeof(IndexTuple) * (len + 1));
+ lvectup = palloc_array(IndexTuple, (len + 1));
+ rvectup = palloc_array(IndexTuple, (len + 1));
for (i = 0; i < v.splitVector.spl_nleft; i++)
lvectup[i] = itup[v.splitVector.spl_left[i] - 1];
@@ -1541,7 +1541,7 @@ initGISTstate(Relation index)
oldCxt = MemoryContextSwitchTo(scanCxt);
/* Create and fill in the GISTSTATE */
- giststate = (GISTSTATE *) palloc(sizeof(GISTSTATE));
+ giststate = palloc_object(GISTSTATE);
giststate->scanCxt = scanCxt;
giststate->tempCxt = scanCxt; /* caller must change this if needed */
diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c
index 9e707167d98..8603cb71c12 100644
--- a/src/backend/access/gist/gistbuild.c
+++ b/src/backend/access/gist/gistbuild.c
@@ -346,7 +346,7 @@ gistbuild(Relation heap, Relation index, IndexInfo *indexInfo)
/*
* Return statistics
*/
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = (double) buildstate.indtuples;
@@ -409,7 +409,7 @@ gist_indexsortbuild(GISTBuildState *state)
state->bulkstate = smgr_bulk_start_rel(state->indexrel, MAIN_FORKNUM);
/* Allocate a temporary buffer for the first leaf page batch. */
- levelstate = palloc0(sizeof(GistSortedBuildLevelState));
+ levelstate = palloc0_object(GistSortedBuildLevelState);
levelstate->pages[0] = palloc(BLCKSZ);
levelstate->parent = NULL;
gistinitpage(levelstate->pages[0], F_LEAF);
@@ -526,7 +526,7 @@ gist_indexsortbuild_levelstate_flush(GISTBuildState *state,
else
{
/* Create split layout from single page */
- dist = (SplitPageLayout *) palloc0(sizeof(SplitPageLayout));
+ dist = palloc0_object(SplitPageLayout);
union_tuple = gistunion(state->indexrel, itvec, vect_len,
state->giststate);
dist->itup = union_tuple;
@@ -597,7 +597,7 @@ gist_indexsortbuild_levelstate_flush(GISTBuildState *state,
parent = levelstate->parent;
if (parent == NULL)
{
- parent = palloc0(sizeof(GistSortedBuildLevelState));
+ parent = palloc0_object(GistSortedBuildLevelState);
parent->pages[0] = palloc(BLCKSZ);
parent->parent = NULL;
gistinitpage(parent->pages[0], 0);
@@ -1154,7 +1154,7 @@ gistbufferinginserttuples(GISTBuildState *buildstate, Buffer buffer, int level,
/* Create an array of all the downlink tuples */
ndownlinks = list_length(splitinfo);
- downlinks = (IndexTuple *) palloc(sizeof(IndexTuple) * ndownlinks);
+ downlinks = palloc_array(IndexTuple, ndownlinks);
i = 0;
foreach(lc, splitinfo)
{
diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c
index 0707254d18e..9868ef0e252 100644
--- a/src/backend/access/gist/gistbuildbuffers.c
+++ b/src/backend/access/gist/gistbuildbuffers.c
@@ -46,7 +46,7 @@ gistInitBuildBuffers(int pagesPerBuffer, int levelStep, int maxLevel)
GISTBuildBuffers *gfbb;
HASHCTL hashCtl;
- gfbb = palloc(sizeof(GISTBuildBuffers));
+ gfbb = palloc_object(GISTBuildBuffers);
gfbb->pagesPerBuffer = pagesPerBuffer;
gfbb->levelStep = levelStep;
@@ -582,9 +582,8 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate,
* Allocate memory for information about relocation buffers.
*/
splitPagesCount = list_length(splitinfo);
- relocationBuffersInfos =
- (RelocationBufferInfo *) palloc(sizeof(RelocationBufferInfo) *
- splitPagesCount);
+ relocationBuffersInfos = palloc_array(RelocationBufferInfo,
+ splitPagesCount);
/*
* Fill relocation buffers information for node buffers of pages produced
diff --git a/src/backend/access/gist/gistproc.c b/src/backend/access/gist/gistproc.c
index 392163cb229..d8e7c83ec50 100644
--- a/src/backend/access/gist/gistproc.c
+++ b/src/backend/access/gist/gistproc.c
@@ -171,7 +171,7 @@ gist_box_union(PG_FUNCTION_ARGS)
*pageunion;
numranges = entryvec->n;
- pageunion = (BOX *) palloc(sizeof(BOX));
+ pageunion = palloc_object(BOX);
cur = DatumGetBoxP(entryvec->vector[0].key);
memcpy(pageunion, cur, sizeof(BOX));
@@ -237,7 +237,7 @@ fallbackSplit(GistEntryVector *entryvec, GIST_SPLITVEC *v)
v->spl_left[v->spl_nleft] = i;
if (unionL == NULL)
{
- unionL = (BOX *) palloc(sizeof(BOX));
+ unionL = palloc_object(BOX);
*unionL = *cur;
}
else
@@ -250,7 +250,7 @@ fallbackSplit(GistEntryVector *entryvec, GIST_SPLITVEC *v)
v->spl_right[v->spl_nright] = i;
if (unionR == NULL)
{
- unionR = (BOX *) palloc(sizeof(BOX));
+ unionR = palloc_object(BOX);
*unionR = *cur;
}
else
@@ -515,8 +515,8 @@ gist_box_picksplit(PG_FUNCTION_ARGS)
nentries = context.entriesCount = maxoff - FirstOffsetNumber + 1;
/* Allocate arrays for intervals along axes */
- intervalsLower = (SplitInterval *) palloc(nentries * sizeof(SplitInterval));
- intervalsUpper = (SplitInterval *) palloc(nentries * sizeof(SplitInterval));
+ intervalsLower = palloc_array(SplitInterval, nentries);
+ intervalsUpper = palloc_array(SplitInterval, nentries);
/*
* Calculate the overall minimum bounding box over all the entries.
@@ -698,15 +698,15 @@ gist_box_picksplit(PG_FUNCTION_ARGS)
v->spl_nright = 0;
/* Allocate bounding boxes of left and right groups */
- leftBox = palloc0(sizeof(BOX));
- rightBox = palloc0(sizeof(BOX));
+ leftBox = palloc0_object(BOX);
+ rightBox = palloc0_object(BOX);
/*
* Allocate an array for "common entries" - entries which can be placed to
* either group without affecting overlap along selected axis.
*/
commonEntriesCount = 0;
- commonEntries = (CommonEntry *) palloc(nentries * sizeof(CommonEntry));
+ commonEntries = palloc_array(CommonEntry, nentries);
/* Helper macros to place an entry in the left or right group */
#define PLACE_LEFT(box, off) \
@@ -1042,10 +1042,10 @@ gist_poly_compress(PG_FUNCTION_ARGS)
POLYGON *in = DatumGetPolygonP(entry->key);
BOX *r;
- r = (BOX *) palloc(sizeof(BOX));
+ r = palloc_object(BOX);
memcpy(r, &(in->boundbox), sizeof(BOX));
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page,
entry->offset, false);
@@ -1107,13 +1107,13 @@ gist_circle_compress(PG_FUNCTION_ARGS)
CIRCLE *in = DatumGetCircleP(entry->key);
BOX *r;
- r = (BOX *) palloc(sizeof(BOX));
+ r = palloc_object(BOX);
r->high.x = float8_pl(in->center.x, in->radius);
r->low.x = float8_mi(in->center.x, in->radius);
r->high.y = float8_pl(in->center.y, in->radius);
r->low.y = float8_mi(in->center.y, in->radius);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(r),
entry->rel, entry->page,
entry->offset, false);
@@ -1171,9 +1171,9 @@ gist_point_compress(PG_FUNCTION_ARGS)
if (entry->leafkey) /* Point, actually */
{
- BOX *box = palloc(sizeof(BOX));
+ BOX *box = palloc_object(BOX);
Point *point = DatumGetPointP(entry->key);
- GISTENTRY *retval = palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
box->high = box->low = *point;
@@ -1200,9 +1200,9 @@ gist_point_fetch(PG_FUNCTION_ARGS)
Point *r;
GISTENTRY *retval;
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
- r = (Point *) palloc(sizeof(Point));
+ r = palloc_object(Point);
r->x = in->high.x;
r->y = in->high.y;
gistentryinit(*retval, PointerGetDatum(r),
diff --git a/src/backend/access/gist/gistscan.c b/src/backend/access/gist/gistscan.c
index 700fa959d03..36d08eba852 100644
--- a/src/backend/access/gist/gistscan.c
+++ b/src/backend/access/gist/gistscan.c
@@ -228,7 +228,7 @@ gistrescan(IndexScanDesc scan, ScanKey key, int nkeys,
*/
if (!first_time)
{
- fn_extras = (void **) palloc(scan->numberOfKeys * sizeof(void *));
+ fn_extras = palloc_array(void *, scan->numberOfKeys);
for (i = 0; i < scan->numberOfKeys; i++)
fn_extras[i] = scan->keyData[i].sk_func.fn_extra;
}
@@ -283,7 +283,8 @@ gistrescan(IndexScanDesc scan, ScanKey key, int nkeys,
/* As above, preserve fn_extra if not first time through */
if (!first_time)
{
- fn_extras = (void **) palloc(scan->numberOfOrderBys * sizeof(void *));
+ fn_extras = palloc_array(void *,
+ scan->numberOfOrderBys);
for (i = 0; i < scan->numberOfOrderBys; i++)
fn_extras[i] = scan->orderByData[i].sk_func.fn_extra;
}
diff --git a/src/backend/access/gist/gistsplit.c b/src/backend/access/gist/gistsplit.c
index 49838ceb07b..d4909c05a7d 100644
--- a/src/backend/access/gist/gistsplit.c
+++ b/src/backend/access/gist/gistsplit.c
@@ -51,7 +51,7 @@ gistunionsubkeyvec(GISTSTATE *giststate, IndexTuple *itvec,
int i,
cleanedLen = 0;
- cleanedItVec = (IndexTuple *) palloc(sizeof(IndexTuple) * gsvp->len);
+ cleanedItVec = palloc_array(IndexTuple, gsvp->len);
for (i = 0; i < gsvp->len; i++)
{
@@ -632,7 +632,7 @@ gistSplitByKey(Relation r, Page page, IndexTuple *itup, int len,
/* note that entryvec->vector[0] goes unused in this code */
entryvec = palloc(GEVHDRSZ + (len + 1) * sizeof(GISTENTRY));
entryvec->n = len + 1;
- offNullTuples = (OffsetNumber *) palloc(len * sizeof(OffsetNumber));
+ offNullTuples = palloc_array(OffsetNumber, len);
for (i = 1; i <= len; i++)
{
@@ -716,8 +716,10 @@ gistSplitByKey(Relation r, Page page, IndexTuple *itup, int len,
* Form an array of just the don't-care tuples to pass to a
* recursive invocation of this function for the next column.
*/
- IndexTuple *newitup = (IndexTuple *) palloc(len * sizeof(IndexTuple));
- OffsetNumber *map = (OffsetNumber *) palloc(len * sizeof(OffsetNumber));
+ IndexTuple *newitup = palloc_array(IndexTuple,
+ len);
+ OffsetNumber *map = palloc_array(OffsetNumber,
+ len);
int newlen = 0;
GIST_SPLITVEC backupSplit;
diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c
index 48db718b904..f3fcf142132 100644
--- a/src/backend/access/gist/gistutil.c
+++ b/src/backend/access/gist/gistutil.c
@@ -100,7 +100,7 @@ gistextractpage(Page page, int *len /* out */ )
maxoff = PageGetMaxOffsetNumber(page);
*len = maxoff;
- itvec = palloc(sizeof(IndexTuple) * maxoff);
+ itvec = palloc_array(IndexTuple, maxoff);
for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
itvec[i - FirstOffsetNumber] = (IndexTuple) PageGetItem(page, PageGetItemId(page, i));
@@ -113,7 +113,7 @@ gistextractpage(Page page, int *len /* out */ )
IndexTuple *
gistjoinvector(IndexTuple *itvec, int *len, IndexTuple *additvec, int addlen)
{
- itvec = (IndexTuple *) repalloc(itvec, sizeof(IndexTuple) * ((*len) + addlen));
+ itvec = repalloc_array(itvec, IndexTuple, ((*len) + addlen));
memmove(&itvec[*len], additvec, sizeof(IndexTuple) * addlen);
*len += addlen;
return itvec;
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index fe0bfb781ca..bc250fdbb1c 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -61,7 +61,7 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
{
/* allocate stats if first time through, else re-use existing struct */
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
gistvacuumscan(info, stats, callback, callback_state);
@@ -85,7 +85,7 @@ gistvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
*/
if (stats == NULL)
{
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
gistvacuumscan(info, stats, NULL, NULL);
}
diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c
index 9d54e64787a..ed1d324d46c 100644
--- a/src/backend/access/gist/gistxlog.c
+++ b/src/backend/access/gist/gistxlog.c
@@ -230,7 +230,7 @@ decodePageSplitRecord(char *begin, int len, int *n)
memcpy(n, begin, sizeof(int));
ptr = begin + sizeof(int);
- tuples = palloc(*n * sizeof(IndexTuple));
+ tuples = palloc_array(IndexTuple, *n);
for (i = 0; i < *n; i++)
{
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index f950b9925f5..94c6e290693 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -187,7 +187,7 @@ hashbuild(Relation heap, Relation index, IndexInfo *indexInfo)
/*
* Return statistics
*/
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = buildstate.indtuples;
@@ -627,7 +627,7 @@ loop_top:
/* return statistics */
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
stats->estimated_count = false;
stats->num_index_tuples = num_index_tuples;
stats->tuples_removed += tuples_removed;
diff --git a/src/backend/access/hash/hashsort.c b/src/backend/access/hash/hashsort.c
index 6e8c0e68a92..1df83449350 100644
--- a/src/backend/access/hash/hashsort.c
+++ b/src/backend/access/hash/hashsort.c
@@ -59,7 +59,7 @@ struct HSpool
HSpool *
_h_spoolinit(Relation heap, Relation index, uint32 num_buckets)
{
- HSpool *hspool = (HSpool *) palloc0(sizeof(HSpool));
+ HSpool *hspool = palloc0_object(HSpool);
hspool->index = index;
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b6349950294..b7a275d87d8 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -2298,7 +2298,7 @@ heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples,
HEAP_DEFAULT_FILLFACTOR);
/* Toast and set header data in all the slots */
- heaptuples = palloc(ntuples * sizeof(HeapTuple));
+ heaptuples = palloc_array(HeapTuple, ntuples);
for (i = 0; i < ntuples; i++)
{
HeapTuple tuple;
@@ -6730,7 +6730,7 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
* even member XIDs >= OldestXmin often won't be kept by second pass.
*/
nnewmembers = 0;
- newmembers = palloc(sizeof(MultiXactMember) * nmembers);
+ newmembers = palloc_array(MultiXactMember, nmembers);
has_lockers = false;
update_xid = InvalidTransactionId;
update_committed = false;
@@ -8544,7 +8544,7 @@ bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate)
Assert(delstate->ndeltids > 0);
/* Calculate per-heap-block count of TIDs */
- blockgroups = palloc(sizeof(IndexDeleteCounts) * delstate->ndeltids);
+ blockgroups = palloc_array(IndexDeleteCounts, delstate->ndeltids);
for (int i = 0; i < delstate->ndeltids; i++)
{
TM_IndexDelete *ideltid = &delstate->deltids[i];
@@ -8617,7 +8617,7 @@ bottomup_sort_and_shrink(TM_IndexDeleteOp *delstate)
/* Sort groups and rearrange caller's deltids array */
qsort(blockgroups, nblockgroups, sizeof(IndexDeleteCounts),
bottomup_sort_and_shrink_cmp);
- reordereddeltids = palloc(delstate->ndeltids * sizeof(TM_IndexDelete));
+ reordereddeltids = palloc_array(TM_IndexDelete, delstate->ndeltids);
nblockgroups = Min(BOTTOMUP_MAX_NBLOCKS, nblockgroups);
/* Determine number of favorable blocks at the start of final deltids */
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index a4003cf59e1..96cdfae1cc8 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -77,7 +77,7 @@ heapam_slot_callbacks(Relation relation)
static IndexFetchTableData *
heapam_index_fetch_begin(Relation rel)
{
- IndexFetchHeapData *hscan = palloc0(sizeof(IndexFetchHeapData));
+ IndexFetchHeapData *hscan = palloc0_object(IndexFetchHeapData);
hscan->xs_base.rel = rel;
hscan->xs_cbuf = InvalidBuffer;
@@ -713,8 +713,8 @@ heapam_relation_copy_for_cluster(Relation OldHeap, Relation NewHeap,
/* Preallocate values/isnull arrays */
natts = newTupDesc->natts;
- values = (Datum *) palloc(natts * sizeof(Datum));
- isnull = (bool *) palloc(natts * sizeof(bool));
+ values = palloc_array(Datum, natts);
+ isnull = palloc_array(bool, natts);
/* Initialize the rewrite operation */
rwstate = begin_heap_rewrite(OldHeap, NewHeap, OldestXmin, *xid_cutoff,
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 5b0e790e121..87b734fbfe9 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -409,7 +409,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
* of each rel. It's convenient for code in lazy_scan_heap to always use
* these temp copies.
*/
- vacrel = (LVRelState *) palloc0(sizeof(LVRelState));
+ vacrel = palloc0_object(LVRelState);
vacrel->dbname = get_database_name(MyDatabaseId);
vacrel->relnamespace = get_namespace_name(RelationGetNamespace(rel));
vacrel->relname = pstrdup(RelationGetRelationName(rel));
@@ -429,7 +429,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,
if (instrument && vacrel->nindexes > 0)
{
/* Copy index names used by instrumentation (not error reporting) */
- indnames = palloc(sizeof(char *) * vacrel->nindexes);
+ indnames = palloc_array(char *, vacrel->nindexes);
for (int i = 0; i < vacrel->nindexes; i++)
indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i]));
}
@@ -3033,7 +3033,7 @@ dead_items_alloc(LVRelState *vacrel, int nworkers)
* locally.
*/
- dead_items_info = (VacDeadItemsInfo *) palloc(sizeof(VacDeadItemsInfo));
+ dead_items_info = palloc_object(VacDeadItemsInfo);
dead_items_info->max_bytes = vac_work_mem * 1024L;
dead_items_info->num_items = 0;
vacrel->dead_items_info = dead_items_info;
diff --git a/src/backend/access/index/amvalidate.c b/src/backend/access/index/amvalidate.c
index 4cf237019ad..8d7e7171bd7 100644
--- a/src/backend/access/index/amvalidate.c
+++ b/src/backend/access/index/amvalidate.c
@@ -118,7 +118,7 @@ identify_opfamily_groups(CatCList *oprlist, CatCList *proclist)
}
/* Time for a new group */
- thisgroup = (OpFamilyOpFuncGroup *) palloc(sizeof(OpFamilyOpFuncGroup));
+ thisgroup = palloc_object(OpFamilyOpFuncGroup);
if (oprform &&
(!procform ||
(oprform->amoplefttype < procform->amproclefttype ||
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 3eddbcf3a82..db4e79495a8 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -2950,7 +2950,7 @@ _bt_deadblocks(Page page, OffsetNumber *deletable, int ndeletable,
*/
spacentids = ndeletable + 1;
ntids = 0;
- tidblocks = (BlockNumber *) palloc(sizeof(BlockNumber) * spacentids);
+ tidblocks = palloc_array(BlockNumber, spacentids);
/*
* First add the table block for the incoming newitem. This is the one
@@ -2972,8 +2972,9 @@ _bt_deadblocks(Page page, OffsetNumber *deletable, int ndeletable,
if (ntids + 1 > spacentids)
{
spacentids *= 2;
- tidblocks = (BlockNumber *)
- repalloc(tidblocks, sizeof(BlockNumber) * spacentids);
+ tidblocks = repalloc_array(tidblocks, BlockNumber, spacentids);
}
tidblocks[ntids++] = ItemPointerGetBlockNumber(&itup->t_tid);
@@ -2985,8 +2986,9 @@ _bt_deadblocks(Page page, OffsetNumber *deletable, int ndeletable,
if (ntids + nposting > spacentids)
{
spacentids = Max(spacentids * 2, ntids + nposting);
- tidblocks = (BlockNumber *)
- repalloc(tidblocks, sizeof(BlockNumber) * spacentids);
+ tidblocks = repalloc_array(tidblocks, BlockNumber, spacentids);
}
for (int j = 0; j < nposting; j++)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 3d617f168f5..5686fc1f531 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -867,7 +867,7 @@ btbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
/* allocate stats if first time through, else re-use existing struct */
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
/* Establish the vacuum cycle ID to use for this scan */
/* The ENSURE stuff ensures we clean up shared memory on failure */
@@ -928,7 +928,7 @@ btvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
* We handle the problem by making num_index_tuples an estimate in
* cleanup-only case.
*/
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
btvacuumscan(info, stats, NULL, NULL, 0);
stats->estimated_count = true;
}
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index 7aba852db90..aebdab16d07 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -334,7 +334,7 @@ btbuild(Relation heap, Relation index, IndexInfo *indexInfo)
if (buildstate.btleader)
_bt_end_parallel(buildstate.btleader);
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = buildstate.indtuples;
@@ -365,7 +365,7 @@ static double
_bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
IndexInfo *indexInfo)
{
- BTSpool *btspool = (BTSpool *) palloc0(sizeof(BTSpool));
+ BTSpool *btspool = palloc0_object(BTSpool);
SortCoordinate coordinate = NULL;
double reltuples = 0;
@@ -439,7 +439,7 @@ _bt_spools_heapscan(Relation heap, Relation index, BTBuildState *buildstate,
*/
if (indexInfo->ii_Unique)
{
- BTSpool *btspool2 = (BTSpool *) palloc0(sizeof(BTSpool));
+ BTSpool *btspool2 = palloc0_object(BTSpool);
SortCoordinate coordinate2 = NULL;
/* Initialize secondary spool */
@@ -647,7 +647,7 @@ _bt_blwritepage(BTWriteState *wstate, BulkWriteBuffer buf, BlockNumber blkno)
static BTPageState *
_bt_pagestate(BTWriteState *wstate, uint32 level)
{
- BTPageState *state = (BTPageState *) palloc0(sizeof(BTPageState));
+ BTPageState *state = palloc0_object(BTPageState);
/* create initial page for level */
state->btps_buf = _bt_blnewpage(wstate, level);
@@ -1406,7 +1406,7 @@ _bt_begin_parallel(BTBuildState *buildstate, bool isconcurrent, int request)
Sharedsort *sharedsort;
Sharedsort *sharedsort2;
BTSpool *btspool = buildstate->spool;
- BTLeader *btleader = (BTLeader *) palloc0(sizeof(BTLeader));
+ BTLeader *btleader = palloc0_object(BTLeader);
WalUsage *walusage;
BufferUsage *bufferusage;
bool leaderparticipates = true;
@@ -1695,7 +1695,7 @@ _bt_leader_participate_as_worker(BTBuildState *buildstate)
int sortmem;
/* Allocate memory and initialize private spool */
- leaderworker = (BTSpool *) palloc0(sizeof(BTSpool));
+ leaderworker = palloc0_object(BTSpool);
leaderworker->heap = buildstate->spool->heap;
leaderworker->index = buildstate->spool->index;
leaderworker->isunique = buildstate->spool->isunique;
@@ -1707,7 +1707,7 @@ _bt_leader_participate_as_worker(BTBuildState *buildstate)
else
{
/* Allocate memory for worker's own private secondary spool */
- leaderworker2 = (BTSpool *) palloc0(sizeof(BTSpool));
+ leaderworker2 = palloc0_object(BTSpool);
/* Initialize worker's own secondary spool */
leaderworker2->heap = leaderworker->heap;
@@ -1798,7 +1798,7 @@ _bt_parallel_build_main(dsm_segment *seg, shm_toc *toc)
indexRel = index_open(btshared->indexrelid, indexLockmode);
/* Initialize worker's own spool */
- btspool = (BTSpool *) palloc0(sizeof(BTSpool));
+ btspool = palloc0_object(BTSpool);
btspool->heap = heapRel;
btspool->index = indexRel;
btspool->isunique = btshared->isunique;
@@ -1815,7 +1815,7 @@ _bt_parallel_build_main(dsm_segment *seg, shm_toc *toc)
else
{
/* Allocate memory for worker's own private secondary spool */
- btspool2 = (BTSpool *) palloc0(sizeof(BTSpool));
+ btspool2 = palloc0_object(BTSpool);
/* Initialize worker's own secondary spool */
btspool2->heap = btspool->heap;
diff --git a/src/backend/access/spgist/spgdoinsert.c b/src/backend/access/spgist/spgdoinsert.c
index 58c06ef2dc4..074419eb9d4 100644
--- a/src/backend/access/spgist/spgdoinsert.c
+++ b/src/backend/access/spgist/spgdoinsert.c
@@ -89,7 +89,7 @@ addNode(SpGistState *state, SpGistInnerTuple tuple, Datum label, int offset)
else if (offset > tuple->nNodes)
elog(ERROR, "invalid offset for adding node to SPGiST inner tuple");
- nodes = palloc(sizeof(SpGistNodeTuple) * (tuple->nNodes + 1));
+ nodes = palloc_array(SpGistNodeTuple, (tuple->nNodes + 1));
SGITITERATE(tuple, i, node)
{
if (i < offset)
@@ -410,8 +410,8 @@ moveLeafs(Relation index, SpGistState *state,
/* Locate the tuples to be moved, and count up the space needed */
i = PageGetMaxOffsetNumber(current->page);
- toDelete = (OffsetNumber *) palloc(sizeof(OffsetNumber) * i);
- toInsert = (OffsetNumber *) palloc(sizeof(OffsetNumber) * (i + 1));
+ toDelete = palloc_array(OffsetNumber, i);
+ toInsert = palloc_array(OffsetNumber, (i + 1));
size = newLeafTuple->size + sizeof(ItemIdData);
@@ -722,11 +722,11 @@ doPickSplit(Relation index, SpGistState *state,
max = PageGetMaxOffsetNumber(current->page);
n = max + 1;
in.datums = (Datum *) palloc(sizeof(Datum) * n);
- toDelete = (OffsetNumber *) palloc(sizeof(OffsetNumber) * n);
- toInsert = (OffsetNumber *) palloc(sizeof(OffsetNumber) * n);
- oldLeafs = (SpGistLeafTuple *) palloc(sizeof(SpGistLeafTuple) * n);
- newLeafs = (SpGistLeafTuple *) palloc(sizeof(SpGistLeafTuple) * n);
- leafPageSelect = (uint8 *) palloc(sizeof(uint8) * n);
+ toDelete = palloc_array(OffsetNumber, n);
+ toInsert = palloc_array(OffsetNumber, n);
+ oldLeafs = palloc_array(SpGistLeafTuple, n);
+ newLeafs = palloc_array(SpGistLeafTuple, n);
+ leafPageSelect = palloc_array(uint8, n);
STORE_STATE(state, xlrec.stateSrc);
@@ -918,8 +918,8 @@ doPickSplit(Relation index, SpGistState *state,
* out.nNodes with a value larger than the number of tuples on the input
* page, we can't allocate these arrays before here.
*/
- nodes = (SpGistNodeTuple *) palloc(sizeof(SpGistNodeTuple) * out.nNodes);
- leafSizes = (int *) palloc0(sizeof(int) * out.nNodes);
+ nodes = palloc_array(SpGistNodeTuple, out.nNodes);
+ leafSizes = palloc0_array(int, out.nNodes);
/*
* Form nodes of inner tuple and inner tuple itself
@@ -1058,7 +1058,7 @@ doPickSplit(Relation index, SpGistState *state,
* do so, even if totalLeafSizes is less than the available space,
* because we can't split a group across pages.
*/
- nodePageSelect = (uint8 *) palloc(sizeof(uint8) * out.nNodes);
+ nodePageSelect = palloc_array(uint8, out.nNodes);
curspace = currentFreeSpace;
newspace = PageGetExactFreeSpace(BufferGetPage(newLeafBuffer));
@@ -1744,8 +1744,8 @@ spgSplitNodeAction(Relation index, SpGistState *state,
* Construct new prefix tuple with requested number of nodes. We'll fill
* in the childNodeN'th node's downlink below.
*/
- nodes = (SpGistNodeTuple *) palloc(sizeof(SpGistNodeTuple) *
- out->result.splitTuple.prefixNNodes);
+ nodes = palloc_array(SpGistNodeTuple,
+ out->result.splitTuple.prefixNNodes);
for (i = 0; i < out->result.splitTuple.prefixNNodes; i++)
{
@@ -1773,7 +1773,7 @@ spgSplitNodeAction(Relation index, SpGistState *state,
* same node datums, but with the prefix specified by the picksplit
* function.
*/
- nodes = palloc(sizeof(SpGistNodeTuple) * innerTuple->nNodes);
+ nodes = palloc_array(SpGistNodeTuple, innerTuple->nNodes);
SGITITERATE(innerTuple, i, node)
{
nodes[i] = node;
diff --git a/src/backend/access/spgist/spginsert.c b/src/backend/access/spgist/spginsert.c
index 6a61e093fa0..dda99755f66 100644
--- a/src/backend/access/spgist/spginsert.c
+++ b/src/backend/access/spgist/spginsert.c
@@ -140,7 +140,7 @@ spgbuild(Relation heap, Relation index, IndexInfo *indexInfo)
true);
}
- result = (IndexBuildResult *) palloc0(sizeof(IndexBuildResult));
+ result = palloc0_object(IndexBuildResult);
result->heap_tuples = reltuples;
result->index_tuples = buildstate.indtuples;
diff --git a/src/backend/access/spgist/spgkdtreeproc.c b/src/backend/access/spgist/spgkdtreeproc.c
index d6989759e5f..89831f29c97 100644
--- a/src/backend/access/spgist/spgkdtreeproc.c
+++ b/src/backend/access/spgist/spgkdtreeproc.c
@@ -114,7 +114,7 @@ spg_kd_picksplit(PG_FUNCTION_ARGS)
SortedPoint *sorted;
double coord;
- sorted = palloc(sizeof(*sorted) * in->nTuples);
+ sorted = palloc_array(SortedPoint, in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
sorted[i].p = DatumGetPointP(in->datums[i]);
diff --git a/src/backend/access/spgist/spgproc.c b/src/backend/access/spgist/spgproc.c
index 660009291da..05d560b5ad9 100644
--- a/src/backend/access/spgist/spgproc.c
+++ b/src/backend/access/spgist/spgproc.c
@@ -64,7 +64,7 @@ spg_key_orderbys_distances(Datum key, bool isLeaf,
ScanKey orderbys, int norderbys)
{
int sk_num;
- double *distances = (double *) palloc(norderbys * sizeof(double)),
+ double *distances = palloc_array(double, norderbys),
*distance = distances;
for (sk_num = 0; sk_num < norderbys; ++sk_num, ++orderbys, ++distance)
@@ -81,7 +81,7 @@ spg_key_orderbys_distances(Datum key, bool isLeaf,
BOX *
box_copy(BOX *orig)
{
- BOX *result = palloc(sizeof(BOX));
+ BOX *result = palloc_object(BOX);
*result = *orig;
return result;
diff --git a/src/backend/access/spgist/spgquadtreeproc.c b/src/backend/access/spgist/spgquadtreeproc.c
index 3e8cfa1709a..e0baa28ba88 100644
--- a/src/backend/access/spgist/spgquadtreeproc.c
+++ b/src/backend/access/spgist/spgquadtreeproc.c
@@ -82,7 +82,7 @@ getQuadrant(Point *centroid, Point *tst)
static BOX *
getQuadrantArea(BOX *bbox, Point *centroid, int quadrant)
{
- BOX *result = (BOX *) palloc(sizeof(BOX));
+ BOX *result = palloc_object(BOX);
switch (quadrant)
{
@@ -177,11 +177,11 @@ spg_quad_picksplit(PG_FUNCTION_ARGS)
/* Use the median values of x and y as the centroid point */
Point **sorted;
- sorted = palloc(sizeof(*sorted) * in->nTuples);
+ sorted = palloc_array(Point *, in->nTuples);
for (i = 0; i < in->nTuples; i++)
sorted[i] = DatumGetPointP(in->datums[i]);
- centroid = palloc(sizeof(*centroid));
+ centroid = palloc_object(Point);
qsort(sorted, in->nTuples, sizeof(*sorted), x_cmp);
centroid->x = sorted[in->nTuples >> 1]->x;
@@ -189,7 +189,7 @@ spg_quad_picksplit(PG_FUNCTION_ARGS)
centroid->y = sorted[in->nTuples >> 1]->y;
#else
/* Use the average values of x and y as the centroid point */
- centroid = palloc0(sizeof(*centroid));
+ centroid = palloc0_object(Point);
for (i = 0; i < in->nTuples; i++)
{
diff --git a/src/backend/access/spgist/spgscan.c b/src/backend/access/spgist/spgscan.c
index 986362a777f..4e0511ae0da 100644
--- a/src/backend/access/spgist/spgscan.c
+++ b/src/backend/access/spgist/spgscan.c
@@ -701,7 +701,7 @@ spgInnerTest(SpGistScanOpaque so, SpGistSearchItem *item,
{
/* collect node pointers */
SpGistNodeTuple node;
- SpGistNodeTuple *nodes = (SpGistNodeTuple *) palloc(sizeof(SpGistNodeTuple) * nNodes);
+ SpGistNodeTuple *nodes = palloc_array(SpGistNodeTuple, nNodes);
SGITITERATE(innerTuple, i, node)
{
@@ -970,8 +970,8 @@ storeGettuple(SpGistScanOpaque so, ItemPointer heapPtr,
so->distances[so->nPtrs] = NULL;
else
{
- IndexOrderByDistance *distances =
- palloc(sizeof(distances[0]) * so->numberOfOrderBys);
+ IndexOrderByDistance *distances = palloc_array(IndexOrderByDistance,
+ so->numberOfOrderBys);
int i;
for (i = 0; i < so->numberOfOrderBys; i++)
diff --git a/src/backend/access/spgist/spgtextproc.c b/src/backend/access/spgist/spgtextproc.c
index 73842655f08..1167b7514c4 100644
--- a/src/backend/access/spgist/spgtextproc.c
+++ b/src/backend/access/spgist/spgtextproc.c
@@ -371,7 +371,7 @@ spg_text_picksplit(PG_FUNCTION_ARGS)
}
/* Extract the node label (first non-common byte) from each value */
- nodes = (spgNodePtr *) palloc(sizeof(spgNodePtr) * in->nTuples);
+ nodes = palloc_array(spgNodePtr, in->nTuples);
for (i = 0; i < in->nTuples; i++)
{
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index 6e968048917..cb987da37f4 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -1172,7 +1172,7 @@ spgExtractNodeLabels(SpGistState *state, SpGistInnerTuple innerTuple)
}
else
{
- nodeLabels = (Datum *) palloc(sizeof(Datum) * innerTuple->nNodes);
+ nodeLabels = palloc_array(Datum, innerTuple->nNodes);
SGITITERATE(innerTuple, i, node)
{
if (IndexTupleHasNulls(node))
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index 894aefa19e1..110c942ab9e 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -75,7 +75,7 @@ spgAddPendingTID(spgBulkDeleteState *bds, ItemPointer tid)
listLink = &pitem->next;
}
/* not there, so append new entry */
- pitem = (spgVacPendingItem *) palloc(sizeof(spgVacPendingItem));
+ pitem = palloc_object(spgVacPendingItem);
pitem->tid = *tid;
pitem->done = false;
pitem->next = NULL;
@@ -920,7 +920,7 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
/* allocate stats if first time through, else re-use existing struct */
if (stats == NULL)
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
bds.info = info;
bds.stats = stats;
bds.callback = callback;
@@ -960,7 +960,7 @@ spgvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
*/
if (stats == NULL)
{
- stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
+ stats = palloc0_object(IndexBulkDeleteResult);
bds.info = info;
bds.stats = stats;
bds.callback = dummy_callback;
diff --git a/src/backend/access/spgist/spgxlog.c b/src/backend/access/spgist/spgxlog.c
index b7986e6f713..3c540f20a1c 100644
--- a/src/backend/access/spgist/spgxlog.c
+++ b/src/backend/access/spgist/spgxlog.c
@@ -909,7 +909,7 @@ spgRedoVacuumRedirect(XLogReaderState *record)
int max = PageGetMaxOffsetNumber(page);
OffsetNumber *toDelete;
- toDelete = palloc(sizeof(OffsetNumber) * max);
+ toDelete = palloc_array(OffsetNumber, max);
for (i = xldata->firstPlaceholder; i <= max; i++)
toDelete[i - xldata->firstPlaceholder] = i;
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index 27ccdf9500f..1d0cf197887 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -558,8 +558,7 @@ MultiXactIdExpand(MultiXactId multi, TransactionId xid, MultiXactStatus status)
* Note we have the same race condition here as above: j could be 0 at the
* end of the loop.
*/
- newMembers = (MultiXactMember *)
- palloc(sizeof(MultiXactMember) * (nmembers + 1));
+ newMembers = palloc_array(MultiXactMember, (nmembers + 1));
for (i = 0, j = 0; i < nmembers; i++)
{
@@ -1506,7 +1505,7 @@ retry:
if (slept)
ConditionVariableCancelSleep();
- ptr = (MultiXactMember *) palloc(length * sizeof(MultiXactMember));
+ ptr = palloc_array(MultiXactMember, length);
truelength = 0;
prev_pageno = -1;
@@ -3532,7 +3531,7 @@ pg_get_multixact_members(PG_FUNCTION_ARGS)
funccxt = SRF_FIRSTCALL_INIT();
oldcxt = MemoryContextSwitchTo(funccxt->multi_call_memory_ctx);
- multi = palloc(sizeof(mxact));
+ multi = palloc_object(mxact);
/* no need to allow for old values here */
multi->nmembers = GetMultiXactIdMembers(mxid, &multi->members, false,
false);
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 7817bedc2ef..acab01f46ac 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -182,7 +182,7 @@ CreateParallelContext(const char *library_name, const char *function_name,
oldcontext = MemoryContextSwitchTo(TopTransactionContext);
/* Initialize a new ParallelContext. */
- pcxt = palloc0(sizeof(ParallelContext));
+ pcxt = palloc0_object(ParallelContext);
pcxt->subid = GetCurrentSubTransactionId();
pcxt->nworkers = nworkers;
pcxt->nworkers_to_launch = nworkers;
diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c
index a27f27cc037..4ebb7495200 100644
--- a/src/backend/access/transam/timeline.c
+++ b/src/backend/access/transam/timeline.c
@@ -87,7 +87,7 @@ readTimeLineHistory(TimeLineID targetTLI)
/* Timeline 1 does not have a history file, so no need to check */
if (targetTLI == 1)
{
- entry = (TimeLineHistoryEntry *) palloc(sizeof(TimeLineHistoryEntry));
+ entry = palloc_object(TimeLineHistoryEntry);
entry->tli = targetTLI;
entry->begin = entry->end = InvalidXLogRecPtr;
return list_make1(entry);
@@ -110,7 +110,7 @@ readTimeLineHistory(TimeLineID targetTLI)
(errcode_for_file_access(),
errmsg("could not open file \"%s\": %m", path)));
/* Not there, so assume no parents */
- entry = (TimeLineHistoryEntry *) palloc(sizeof(TimeLineHistoryEntry));
+ entry = palloc_object(TimeLineHistoryEntry);
entry->tli = targetTLI;
entry->begin = entry->end = InvalidXLogRecPtr;
return list_make1(entry);
@@ -175,7 +175,7 @@ readTimeLineHistory(TimeLineID targetTLI)
lasttli = tli;
- entry = (TimeLineHistoryEntry *) palloc(sizeof(TimeLineHistoryEntry));
+ entry = palloc_object(TimeLineHistoryEntry);
entry->tli = tli;
entry->begin = prevend;
entry->end = ((uint64) (switchpoint_hi)) << 32 | (uint64) switchpoint_lo;
@@ -198,7 +198,7 @@ readTimeLineHistory(TimeLineID targetTLI)
* Create one more entry for the "tip" of the timeline, which has no entry
* in the history file.
*/
- entry = (TimeLineHistoryEntry *) palloc(sizeof(TimeLineHistoryEntry));
+ entry = palloc_object(TimeLineHistoryEntry);
entry->tli = targetTLI;
entry->begin = prevend;
entry->end = InvalidXLogRecPtr;
diff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c
index ab2f4a8a92f..90763565456 100644
--- a/src/backend/access/transam/twophase.c
+++ b/src/backend/access/transam/twophase.c
@@ -746,7 +746,7 @@ pg_prepared_xact(PG_FUNCTION_ARGS)
* Collect all the 2PC status information that we will format and send
* out as a result set.
*/
- status = (Working_State *) palloc(sizeof(Working_State));
+ status = palloc_object(Working_State);
funcctx->user_fctx = status;
status->ngxacts = GetPreparedTransactionList(&status->array);
@@ -2007,12 +2007,15 @@ PrescanPreparedTransactions(TransactionId **xids_p, int *nxids_p)
if (nxids == 0)
{
allocsize = 10;
- xids = palloc(allocsize * sizeof(TransactionId));
+ xids = palloc_array(TransactionId, allocsize);
}
else
{
allocsize = allocsize * 2;
- xids = repalloc(xids, allocsize * sizeof(TransactionId));
+ xids = repalloc_array(xids, TransactionId, allocsize);
}
}
xids[nxids++] = xid;
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
index d331ab90d78..b1974d4556b 100644
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -662,7 +662,7 @@ AssignTransactionId(TransactionState s)
TransactionState *parents;
size_t parentOffset = 0;
- parents = palloc(sizeof(TransactionState) * s->nestingLevel);
+ parents = palloc_array(TransactionState, s->nestingLevel);
while (p != NULL && !FullTransactionIdIsValid(p->fullTransactionId))
{
parents[parentOffset++] = p;
@@ -5569,7 +5569,7 @@ SerializeTransactionState(Size maxsize, char *start_address)
<= maxsize);
/* Copy them to our scratch space. */
- workspace = palloc(nxids * sizeof(TransactionId));
+ workspace = palloc_array(TransactionId, nxids);
for (s = CurrentTransactionState; s != NULL; s = s->parent)
{
if (FullTransactionIdIsValid(s->fullTransactionId))
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index bf3dbda901d..50ab515e651 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -4846,7 +4846,7 @@ void
LocalProcessControlFile(bool reset)
{
Assert(reset || ControlFile == NULL);
- ControlFile = palloc(sizeof(ControlFileData));
+ ControlFile = palloc_object(ControlFileData);
ReadControlFile();
}
@@ -9071,7 +9071,7 @@ do_pg_backup_start(const char *backupidstr, bool fast, List **tablespaces,
continue;
}
- ti = palloc(sizeof(tablespaceinfo));
+ ti = palloc_object(tablespaceinfo);
ti->oid = tsoid;
ti->path = pstrdup(linkpath);
ti->rpath = relpath;
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index 8c3090165f0..8ea7347e39c 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -90,7 +90,7 @@ pg_backup_start(PG_FUNCTION_ARGS)
}
oldcontext = MemoryContextSwitchTo(backupcontext);
- backup_state = (BackupState *) palloc0(sizeof(BackupState));
+ backup_state = palloc0_object(BackupState);
tablespace_map = makeStringInfo();
MemoryContextSwitchTo(oldcontext);
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c
index efed0970924..867fc733269 100644
--- a/src/backend/access/transam/xloginsert.c
+++ b/src/backend/access/transam/xloginsert.c
@@ -196,8 +196,9 @@ XLogEnsureRecordSpace(int max_block_id, int ndatas)
if (nbuffers > max_registered_buffers)
{
- registered_buffers = (registered_buffer *)
- repalloc(registered_buffers, sizeof(registered_buffer) * nbuffers);
+ registered_buffers = repalloc_array(registered_buffers,
+ registered_buffer,
+ nbuffers);
/*
* At least the padding bytes in the structs must be zeroed, because
@@ -210,7 +211,7 @@ XLogEnsureRecordSpace(int max_block_id, int ndatas)
if (ndatas > max_rdatas)
{
- rdatas = (XLogRecData *) repalloc(rdatas, sizeof(XLogRecData) * ndatas);
+ rdatas = repalloc_array(rdatas, XLogRecData, ndatas);
max_rdatas = ndatas;
}
}
diff --git a/src/backend/access/transam/xlogprefetcher.c b/src/backend/access/transam/xlogprefetcher.c
index 7735562db01..0670c897621 100644
--- a/src/backend/access/transam/xlogprefetcher.c
+++ b/src/backend/access/transam/xlogprefetcher.c
@@ -364,7 +364,7 @@ XLogPrefetcherAllocate(XLogReaderState *reader)
XLogPrefetcher *prefetcher;
HASHCTL ctl;
- prefetcher = palloc0(sizeof(XLogPrefetcher));
+ prefetcher = palloc0_object(XLogPrefetcher);
prefetcher->reader = reader;
ctl.keysize = sizeof(RelFileLocator);
diff --git a/src/backend/access/transam/xlogrecovery.c b/src/backend/access/transam/xlogrecovery.c
index 0bbe2eea206..0e0906f99b6 100644
--- a/src/backend/access/transam/xlogrecovery.c
+++ b/src/backend/access/transam/xlogrecovery.c
@@ -551,7 +551,7 @@ InitWalRecovery(ControlFileData *ControlFile, bool *wasShutdown_ptr,
* Set the WAL reading processor now, as it will be needed when reading
* the checkpoint record required (backup_label or not).
*/
- private = palloc0(sizeof(XLogPageReadPrivate));
+ private = palloc0_object(XLogPageReadPrivate);
xlogreader =
XLogReaderAllocate(wal_segment_size, NULL,
XL_ROUTINE(.page_read = &XLogPageRead,
@@ -1406,7 +1406,7 @@ read_tablespace_map(List **tablespaces)
errmsg("invalid data in file \"%s\"", TABLESPACE_MAP)));
str[n++] = '\0';
- ti = palloc0(sizeof(tablespaceinfo));
+ ti = palloc0_object(tablespaceinfo);
errno = 0;
ti->oid = strtoul(str, &endp, 10);
if (*endp != '\0' || errno == EINVAL || errno == ERANGE)
@@ -1457,7 +1457,7 @@ read_tablespace_map(List **tablespaces)
EndOfWalRecoveryInfo *
FinishWalRecovery(void)
{
- EndOfWalRecoveryInfo *result = palloc(sizeof(EndOfWalRecoveryInfo));
+ EndOfWalRecoveryInfo *result = palloc_object(EndOfWalRecoveryInfo);
XLogRecPtr lastRec;
TimeLineID lastRecTLI;
XLogRecPtr endOfLog;
diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c
index 68d53815925..7d992fea070 100644
--- a/src/backend/access/transam/xlogutils.c
+++ b/src/backend/access/transam/xlogutils.c
@@ -585,7 +585,7 @@ CreateFakeRelcacheEntry(RelFileLocator rlocator)
Relation rel;
/* Allocate the Relation struct and all related space in one block. */
- fakeentry = palloc0(sizeof(FakeRelCacheEntryData));
+ fakeentry = palloc0_object(FakeRelCacheEntryData);
rel = (Relation) fakeentry;
rel->rd_rel = &fakeentry->pgc;
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index 3f8a3c55725..11be27a77c5 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -262,7 +262,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
total_checksum_failures = 0;
/* Allocate backup related variables. */
- backup_state = (BackupState *) palloc0(sizeof(BackupState));
+ backup_state = palloc0_object(BackupState);
tablespace_map = makeStringInfo();
basebackup_progress_wait_checkpoint();
@@ -289,7 +289,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
PrepareForIncrementalBackup(ib, backup_state);
/* Add a node for the base directory at the end */
- newti = palloc0(sizeof(tablespaceinfo));
+ newti = palloc0_object(tablespaceinfo);
newti->size = -1;
state.tablespaces = lappend(state.tablespaces, newti);
@@ -1206,7 +1206,8 @@ sendDir(bbsink *sink, const char *path, int basepathlen, bool sizeonly,
* But we don't need it at all if this is not an incremental backup.
*/
if (ib != NULL)
- relative_block_numbers = palloc(sizeof(BlockNumber) * RELSEG_SIZE);
+ relative_block_numbers = palloc_array(BlockNumber, RELSEG_SIZE);
/*
* Determine if the current path is a database directory that can contain
diff --git a/src/backend/backup/basebackup_copy.c b/src/backend/backup/basebackup_copy.c
index a284ce318ff..e5e9c2e94c4 100644
--- a/src/backend/backup/basebackup_copy.c
+++ b/src/backend/backup/basebackup_copy.c
@@ -107,7 +107,7 @@ static const bbsink_ops bbsink_copystream_ops = {
bbsink *
bbsink_copystream_new(bool send_to_client)
{
- bbsink_copystream *sink = palloc0(sizeof(bbsink_copystream));
+ bbsink_copystream *sink = palloc0_object(bbsink_copystream);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_copystream_ops;
sink->send_to_client = send_to_client;
diff --git a/src/backend/backup/basebackup_gzip.c b/src/backend/backup/basebackup_gzip.c
index c4cbb5f5276..aaad834291a 100644
--- a/src/backend/backup/basebackup_gzip.c
+++ b/src/backend/backup/basebackup_gzip.c
@@ -76,7 +76,7 @@ bbsink_gzip_new(bbsink *next, pg_compress_specification *compress)
Assert((compresslevel >= 1 && compresslevel <= 9) ||
compresslevel == Z_DEFAULT_COMPRESSION);
- sink = palloc0(sizeof(bbsink_gzip));
+ sink = palloc0_object(bbsink_gzip);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_gzip_ops;
sink->base.bbs_next = next;
sink->compresslevel = compresslevel;
diff --git a/src/backend/backup/basebackup_incremental.c b/src/backend/backup/basebackup_incremental.c
index 360711fadb8..091bce3fca0 100644
--- a/src/backend/backup/basebackup_incremental.c
+++ b/src/backend/backup/basebackup_incremental.c
@@ -157,7 +157,7 @@ CreateIncrementalBackupInfo(MemoryContext mcxt)
oldcontext = MemoryContextSwitchTo(mcxt);
- ib = palloc0(sizeof(IncrementalBackupInfo));
+ ib = palloc0_object(IncrementalBackupInfo);
ib->mcxt = mcxt;
initStringInfo(&ib->buf);
@@ -169,7 +169,7 @@ CreateIncrementalBackupInfo(MemoryContext mcxt)
*/
ib->manifest_files = backup_file_create(mcxt, 10000, NULL);
- context = palloc0(sizeof(JsonManifestParseContext));
+ context = palloc0_object(JsonManifestParseContext);
/* Parse the manifest. */
context->private_data = ib;
context->version_cb = manifest_process_version;
@@ -311,7 +311,7 @@ PrepareForIncrementalBackup(IncrementalBackupInfo *ib,
* to the beginning.
*/
expectedTLEs = readTimeLineHistory(backup_state->starttli);
- tlep = palloc0(num_wal_ranges * sizeof(TimeLineHistoryEntry *));
+ tlep = palloc0_array(TimeLineHistoryEntry *, num_wal_ranges);
for (i = 0; i < num_wal_ranges; ++i)
{
backup_wal_range *range = list_nth(ib->manifest_wal_ranges, i);
@@ -995,7 +995,7 @@ manifest_process_wal_range(JsonManifestParseContext *context,
XLogRecPtr end_lsn)
{
IncrementalBackupInfo *ib = context->private_data;
- backup_wal_range *range = palloc(sizeof(backup_wal_range));
+ backup_wal_range *range = palloc_object(backup_wal_range);
range->tli = tli;
range->start_lsn = start_lsn;
diff --git a/src/backend/backup/basebackup_lz4.c b/src/backend/backup/basebackup_lz4.c
index c5ceccb846f..bd5704520fb 100644
--- a/src/backend/backup/basebackup_lz4.c
+++ b/src/backend/backup/basebackup_lz4.c
@@ -75,7 +75,7 @@ bbsink_lz4_new(bbsink *next, pg_compress_specification *compress)
compresslevel = compress->level;
Assert(compresslevel >= 0 && compresslevel <= 12);
- sink = palloc0(sizeof(bbsink_lz4));
+ sink = palloc0_object(bbsink_lz4);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_lz4_ops;
sink->base.bbs_next = next;
sink->compresslevel = compresslevel;
diff --git a/src/backend/backup/basebackup_progress.c b/src/backend/backup/basebackup_progress.c
index 1d22b541f89..25ddd576f8b 100644
--- a/src/backend/backup/basebackup_progress.c
+++ b/src/backend/backup/basebackup_progress.c
@@ -62,7 +62,7 @@ bbsink_progress_new(bbsink *next, bool estimate_backup_size)
Assert(next != NULL);
- sink = palloc0(sizeof(bbsink));
+ sink = palloc0_object(bbsink);
*((const bbsink_ops **) &sink->bbs_ops) = &bbsink_progress_ops;
sink->bbs_next = next;
diff --git a/src/backend/backup/basebackup_server.c b/src/backend/backup/basebackup_server.c
index f5c0c61640a..7510ec06f5e 100644
--- a/src/backend/backup/basebackup_server.c
+++ b/src/backend/backup/basebackup_server.c
@@ -59,7 +59,7 @@ static const bbsink_ops bbsink_server_ops = {
bbsink *
bbsink_server_new(bbsink *next, char *pathname)
{
- bbsink_server *sink = palloc0(sizeof(bbsink_server));
+ bbsink_server *sink = palloc0_object(bbsink_server);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_server_ops;
sink->pathname = pathname;
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 84b1309d3bd..8b74828aed6 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -96,7 +96,7 @@ BaseBackupAddTarget(char *name,
* name into a newly-allocated chunk of memory.
*/
oldcontext = MemoryContextSwitchTo(TopMemoryContext);
- newtype = palloc(sizeof(BaseBackupTargetType));
+ newtype = palloc_object(BaseBackupTargetType);
newtype->name = pstrdup(name);
newtype->check_detail = check_detail;
newtype->get_sink = get_sink;
@@ -132,7 +132,7 @@ BaseBackupGetTargetHandle(char *target, char *target_detail)
BaseBackupTargetHandle *handle;
/* Found the target. */
- handle = palloc(sizeof(BaseBackupTargetHandle));
+ handle = palloc_object(BaseBackupTargetHandle);
handle->type = ttype;
handle->detail_arg = ttype->check_detail(target, target_detail);
diff --git a/src/backend/backup/basebackup_throttle.c b/src/backend/backup/basebackup_throttle.c
index b2b743238f9..95746c3ea40 100644
--- a/src/backend/backup/basebackup_throttle.c
+++ b/src/backend/backup/basebackup_throttle.c
@@ -72,7 +72,7 @@ bbsink_throttle_new(bbsink *next, uint32 maxrate)
Assert(next != NULL);
Assert(maxrate > 0);
- sink = palloc0(sizeof(bbsink_throttle));
+ sink = palloc0_object(bbsink_throttle);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_throttle_ops;
sink->base.bbs_next = next;
diff --git a/src/backend/backup/basebackup_zstd.c b/src/backend/backup/basebackup_zstd.c
index 18b2e8fb0b3..647ee0eb978 100644
--- a/src/backend/backup/basebackup_zstd.c
+++ b/src/backend/backup/basebackup_zstd.c
@@ -70,7 +70,7 @@ bbsink_zstd_new(bbsink *next, pg_compress_specification *compress)
Assert(next != NULL);
- sink = palloc0(sizeof(bbsink_zstd));
+ sink = palloc0_object(bbsink_zstd);
*((const bbsink_ops **) &sink->base.bbs_ops) = &bbsink_zstd_ops;
sink->base.bbs_next = next;
sink->compress = compress;
diff --git a/src/backend/backup/walsummary.c b/src/backend/backup/walsummary.c
index c7a2c65cc6a..5d87dfb83bd 100644
--- a/src/backend/backup/walsummary.c
+++ b/src/backend/backup/walsummary.c
@@ -73,7 +73,7 @@ GetWalSummaries(TimeLineID tli, XLogRecPtr start_lsn, XLogRecPtr end_lsn)
continue;
/* Add it to the list. */
- ws = palloc(sizeof(WalSummaryFile));
+ ws = palloc_object(WalSummaryFile);
ws->tli = file_tli;
ws->start_lsn = file_start_lsn;
ws->end_lsn = file_end_lsn;
diff --git a/src/backend/bootstrap/bootstrap.c b/src/backend/bootstrap/bootstrap.c
index 359f58a8f95..95249dd54d1 100644
--- a/src/backend/bootstrap/bootstrap.c
+++ b/src/backend/bootstrap/bootstrap.c
@@ -740,7 +740,7 @@ populate_typ_list(void)
Form_pg_type typForm = (Form_pg_type) GETSTRUCT(tup);
struct typmap *newtyp;
- newtyp = (struct typmap *) palloc(sizeof(struct typmap));
+ newtyp = palloc_object(struct typmap);
Typ = lappend(Typ, newtyp);
newtyp->am_oid = typForm->oid;
@@ -949,7 +949,7 @@ index_register(Oid heap,
oldcxt = MemoryContextSwitchTo(nogc);
- newind = (IndexList *) palloc(sizeof(IndexList));
+ newind = palloc_object(IndexList);
newind->il_heap = heap;
newind->il_ind = ind;
newind->il_info = (IndexInfo *) palloc(sizeof(IndexInfo));
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
index 02a754cc30a..803097769d9 100644
--- a/src/backend/catalog/aclchk.c
+++ b/src/backend/catalog/aclchk.c
@@ -1868,7 +1868,7 @@ ExecGrant_Relation(InternalGrant *istmt)
* corresponds to FirstLowInvalidHeapAttributeNumber.
*/
num_col_privileges = pg_class_tuple->relnatts - FirstLowInvalidHeapAttributeNumber + 1;
- col_privileges = (AclMode *) palloc0(num_col_privileges * sizeof(AclMode));
+ col_privileges = palloc0_array(AclMode, num_col_privileges);
have_col_privileges = false;
/*
diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c
index 18316a3968b..a13add1adfb 100644
--- a/src/backend/catalog/dependency.c
+++ b/src/backend/catalog/dependency.c
@@ -800,8 +800,8 @@ findDependentObjects(const ObjectAddress *object,
* regression testing.)
*/
maxDependentObjects = 128; /* arbitrary initial allocation */
- dependentObjects = (ObjectAddressAndFlags *)
- palloc(maxDependentObjects * sizeof(ObjectAddressAndFlags));
+ dependentObjects = palloc_array(ObjectAddressAndFlags,
+ maxDependentObjects);
numDependentObjects = 0;
ScanKeyInit(&key[0],
@@ -900,9 +900,9 @@ findDependentObjects(const ObjectAddress *object,
{
/* enlarge array if needed */
maxDependentObjects *= 2;
- dependentObjects = (ObjectAddressAndFlags *)
- repalloc(dependentObjects,
- maxDependentObjects * sizeof(ObjectAddressAndFlags));
+ dependentObjects = repalloc_array(dependentObjects,
+ ObjectAddressAndFlags,
+ maxDependentObjects);
}
dependentObjects[numDependentObjects].obj = otherObject;
@@ -2503,7 +2503,7 @@ new_object_addresses(void)
{
ObjectAddresses *addrs;
- addrs = palloc(sizeof(ObjectAddresses));
+ addrs = palloc_object(ObjectAddresses);
addrs->numrefs = 0;
addrs->maxrefs = 32;
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 57ef466acce..4b4c1929908 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -711,7 +711,7 @@ InsertPgAttributeTuples(Relation pg_attribute_rel,
/* Initialize the number of slots to use */
nslots = Min(tupdesc->natts,
(MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_attribute)));
- slot = palloc(sizeof(TupleTableSlot *) * nslots);
+ slot = palloc_array(TupleTableSlot *, nslots);
for (int i = 0; i < nslots; i++)
slot[i] = MakeSingleTupleTableSlot(td, &TTSOpsHeapTuple);
@@ -2096,7 +2096,7 @@ StoreRelCheck(Relation rel, const char *ccname, Node *expr,
ListCell *vl;
int i = 0;
- attNos = (int16 *) palloc(keycount * sizeof(int16));
+ attNos = palloc_array(int16, keycount);
foreach(vl, varList)
{
Var *var = (Var *) lfirst(vl);
@@ -2387,7 +2387,7 @@ AddRelationNewConstraints(Relation rel,
defOid = StoreAttrDefault(rel, colDef->attnum, expr, is_internal,
colDef->missingMode);
- cooked = (CookedConstraint *) palloc(sizeof(CookedConstraint));
+ cooked = palloc_object(CookedConstraint);
cooked->contype = CONSTR_DEFAULT;
cooked->conoid = defOid;
cooked->name = NULL;
@@ -2521,7 +2521,7 @@ AddRelationNewConstraints(Relation rel,
numchecks++;
- cooked = (CookedConstraint *) palloc(sizeof(CookedConstraint));
+ cooked = palloc_object(CookedConstraint);
cooked->contype = CONSTR_CHECK;
cooked->conoid = constrOid;
cooked->name = ccname;
@@ -2592,7 +2592,7 @@ AddRelationNewConstraints(Relation rel,
inhcount,
cdef->is_no_inherit);
- nncooked = (CookedConstraint *) palloc(sizeof(CookedConstraint));
+ nncooked = palloc_object(CookedConstraint);
nncooked->contype = CONSTR_NOTNULL;
nncooked->conoid = constrOid;
nncooked->name = nnname;
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 7377912b41e..384fbde621c 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -1413,7 +1413,7 @@ index_concurrently_create_copy(Relation heapRelation, Oid oldIndexId,
}
/* Extract opclass options for each attribute */
- opclassOptions = palloc0(sizeof(Datum) * newInfo->ii_NumIndexAttrs);
+ opclassOptions = palloc0_array(Datum, newInfo->ii_NumIndexAttrs);
for (int i = 0; i < newInfo->ii_NumIndexAttrs; i++)
opclassOptions[i] = get_attoptions(oldIndexId, i + 1);
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index d97d632a7ef..a952c79863c 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -3859,7 +3859,7 @@ GetSearchPathMatcher(MemoryContext context)
oldcxt = MemoryContextSwitchTo(context);
- result = (SearchPathMatcher *) palloc0(sizeof(SearchPathMatcher));
+ result = palloc0_object(SearchPathMatcher);
schemas = list_copy(activeSearchPath);
while (schemas && linitial_oid(schemas) != activeCreationNamespace)
{
@@ -3890,7 +3890,7 @@ CopySearchPathMatcher(SearchPathMatcher *path)
{
SearchPathMatcher *result;
- result = (SearchPathMatcher *) palloc(sizeof(SearchPathMatcher));
+ result = palloc_object(SearchPathMatcher);
result->schemas = list_copy(path->schemas);
result->addCatalog = path->addCatalog;
result->addTemp = path->addTemp;
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index d8eb8d3deaa..6cf79ba3fa3 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -6131,8 +6131,8 @@ strlist_to_textarray(List *list)
ALLOCSET_DEFAULT_SIZES);
oldcxt = MemoryContextSwitchTo(memcxt);
- datums = (Datum *) palloc(sizeof(Datum) * list_length(list));
- nulls = palloc(sizeof(bool) * list_length(list));
+ datums = palloc_array(Datum, list_length(list));
+ nulls = palloc_array(bool, list_length(list));
foreach(cell, list)
{
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c
index bbf4742e18c..bab864dca97 100644
--- a/src/backend/catalog/pg_constraint.c
+++ b/src/backend/catalog/pg_constraint.c
@@ -117,7 +117,7 @@ CreateConstraintEntry(const char *constraintName,
{
Datum *conkey;
- conkey = (Datum *) palloc(constraintNKeys * sizeof(Datum));
+ conkey = palloc_array(Datum, constraintNKeys);
for (i = 0; i < constraintNKeys; i++)
conkey[i] = Int16GetDatum(constraintKey[i]);
conkeyArray = construct_array_builtin(conkey, constraintNKeys, INT2OID);
@@ -129,7 +129,7 @@ CreateConstraintEntry(const char *constraintName,
{
Datum *fkdatums;
- fkdatums = (Datum *) palloc(foreignNKeys * sizeof(Datum));
+ fkdatums = palloc_array(Datum, foreignNKeys);
for (i = 0; i < foreignNKeys; i++)
fkdatums[i] = Int16GetDatum(foreignKey[i]);
confkeyArray = construct_array_builtin(fkdatums, foreignNKeys, INT2OID);
@@ -165,7 +165,7 @@ CreateConstraintEntry(const char *constraintName,
{
Datum *opdatums;
- opdatums = (Datum *) palloc(constraintNKeys * sizeof(Datum));
+ opdatums = palloc_array(Datum, constraintNKeys);
for (i = 0; i < constraintNKeys; i++)
opdatums[i] = ObjectIdGetDatum(exclOp[i]);
conexclopArray = construct_array_builtin(opdatums, constraintNKeys, OIDOID);
@@ -822,7 +822,7 @@ RelationGetNotNullConstraints(Oid relid, bool cooked, bool include_noinh)
{
CookedConstraint *cooked;
- cooked = (CookedConstraint *) palloc(sizeof(CookedConstraint));
+ cooked = palloc_object(CookedConstraint);
cooked->contype = CONSTR_NOTNULL;
cooked->conoid = conForm->oid;
diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c
index c8b11f887e2..a6f63e107bb 100644
--- a/src/backend/catalog/pg_depend.c
+++ b/src/backend/catalog/pg_depend.c
@@ -88,7 +88,7 @@ recordMultipleDependencies(const ObjectAddress *depender,
*/
max_slots = Min(nreferenced,
MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_depend));
- slot = palloc(sizeof(TupleTableSlot *) * max_slots);
+ slot = palloc_array(TupleTableSlot *, max_slots);
/* Don't open indexes unless we need to make an update */
indstate = NULL;
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index a1634e58eec..207aab3646c 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -126,7 +126,7 @@ EnumValuesCreate(Oid enumTypeOid, List *vals)
* allocating the next), trouble could only occur if the OID counter wraps
* all the way around before we finish. Which seems unlikely.
*/
- oids = (Oid *) palloc(num_elems * sizeof(Oid));
+ oids = palloc_array(Oid, num_elems);
for (elemno = 0; elemno < num_elems; elemno++)
{
@@ -154,7 +154,7 @@ EnumValuesCreate(Oid enumTypeOid, List *vals)
/* allocate the slots to use and initialize them */
nslots = Min(num_elems,
MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_enum));
- slot = palloc(sizeof(TupleTableSlot *) * nslots);
+ slot = palloc_array(TupleTableSlot *, nslots);
for (int i = 0; i < nslots; i++)
slot[i] = MakeSingleTupleTableSlot(RelationGetDescr(pg_enum),
&TTSOpsHeapTuple);
@@ -362,7 +362,7 @@ restart:
nelems = list->n_members;
/* Sort the existing members by enumsortorder */
- existing = (HeapTuple *) palloc(nelems * sizeof(HeapTuple));
+ existing = palloc_array(HeapTuple, nelems);
for (i = 0; i < nelems; i++)
existing[i] = &(list->members[i]->tuple);
diff --git a/src/backend/catalog/pg_inherits.c b/src/backend/catalog/pg_inherits.c
index 929bb53b620..98409688f47 100644
--- a/src/backend/catalog/pg_inherits.c
+++ b/src/backend/catalog/pg_inherits.c
@@ -105,7 +105,7 @@ find_inheritance_children_extended(Oid parentrelId, bool omit_detached,
* Scan pg_inherits and build a working array of subclass OIDs.
*/
maxoids = 32;
- oidarr = (Oid *) palloc(maxoids * sizeof(Oid));
+ oidarr = palloc_array(Oid, maxoids);
numoids = 0;
relation = table_open(InheritsRelationId, AccessShareLock);
@@ -182,7 +182,7 @@ find_inheritance_children_extended(Oid parentrelId, bool omit_detached,
if (numoids >= maxoids)
{
maxoids *= 2;
- oidarr = (Oid *) repalloc(oidarr, maxoids * sizeof(Oid));
+ oidarr = repalloc_array(oidarr, Oid, maxoids);
}
oidarr[numoids++] = inhrelid;
}
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index b89098f5e99..d337417daa6 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -1059,7 +1059,7 @@ GetPublication(Oid pubid)
pubform = (Form_pg_publication) GETSTRUCT(tup);
- pub = (Publication *) palloc(sizeof(Publication));
+ pub = palloc_object(Publication);
pub->oid = pubid;
pub->name = pstrdup(NameStr(pubform->pubname));
pub->alltables = pubform->puballtables;
@@ -1167,7 +1167,7 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
*/
foreach(lc, pub_elem_tables)
{
- published_rel *table_info = (published_rel *) palloc(sizeof(published_rel));
+ published_rel *table_info = palloc_object(published_rel);
table_info->relid = lfirst_oid(lc);
table_info->pubid = pub_elem->oid;
@@ -1270,7 +1270,7 @@ pg_get_publication_tables(PG_FUNCTION_ARGS)
TupleDesc desc = RelationGetDescr(rel);
int i;
- attnums = (int16 *) palloc(desc->natts * sizeof(int16));
+ attnums = palloc_array(int16, desc->natts);
for (i = 0; i < desc->natts; i++)
{
diff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c
index 536191284e8..d01e31e7cb4 100644
--- a/src/backend/catalog/pg_shdepend.c
+++ b/src/backend/catalog/pg_shdepend.c
@@ -716,8 +716,7 @@ checkSharedDependencies(Oid classId, Oid objectId,
#define MAX_REPORTED_DEPS 100
allocedobjects = 128; /* arbitrary initial array size */
- objects = (ShDependObjectInfo *)
- palloc(allocedobjects * sizeof(ShDependObjectInfo));
+ objects = palloc_array(ShDependObjectInfo, allocedobjects);
numobjects = 0;
initStringInfo(&descs);
initStringInfo(&alldescs);
@@ -757,9 +756,9 @@ checkSharedDependencies(Oid classId, Oid objectId,
if (numobjects >= allocedobjects)
{
allocedobjects *= 2;
- objects = (ShDependObjectInfo *)
- repalloc(objects,
- allocedobjects * sizeof(ShDependObjectInfo));
+ objects = repalloc_array(objects,
+ ShDependObjectInfo,
+ allocedobjects);
}
objects[numobjects].object = object;
objects[numobjects].deptype = sdepForm->deptype;
@@ -791,7 +790,7 @@ checkSharedDependencies(Oid classId, Oid objectId,
}
if (!stored)
{
- dep = (remoteDep *) palloc(sizeof(remoteDep));
+ dep = palloc_object(remoteDep);
dep->dbOid = sdepForm->dbid;
dep->count = 1;
remDeps = lappend(remDeps, dep);
@@ -913,7 +912,7 @@ copyTemplateDependencies(Oid templateDbId, Oid newDbId)
* know that they will be used.
*/
max_slots = MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_shdepend);
- slot = palloc(sizeof(TupleTableSlot *) * max_slots);
+ slot = palloc_array(TupleTableSlot *, max_slots);
indstate = CatalogOpenIndexes(sdepRel);
diff --git a/src/backend/catalog/pg_subscription.c b/src/backend/catalog/pg_subscription.c
index 1395032413e..2021381afe3 100644
--- a/src/backend/catalog/pg_subscription.c
+++ b/src/backend/catalog/pg_subscription.c
@@ -89,7 +89,7 @@ GetSubscription(Oid subid, bool missing_ok)
subform = (Form_pg_subscription) GETSTRUCT(tup);
- sub = (Subscription *) palloc(sizeof(Subscription));
+ sub = palloc_object(Subscription);
sub->oid = subid;
sub->dbid = subform->subdbid;
sub->skiplsn = subform->subskiplsn;
@@ -563,7 +563,7 @@ GetSubscriptionRelations(Oid subid, bool not_ready)
subrel = (Form_pg_subscription_rel) GETSTRUCT(tup);
- relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));
+ relstate = palloc_object(SubscriptionRelState);
relstate->relid = subrel->srrelid;
relstate->state = subrel->srsubstate;
d = SysCacheGetAttr(SUBSCRIPTIONRELMAP, tup,
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index e74b471d4b9..d6634255dd1 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -691,12 +691,15 @@ smgrDoPendingDeletes(bool isCommit)
if (maxrels == 0)
{
maxrels = 8;
- srels = palloc(sizeof(SMgrRelation) * maxrels);
+ srels = palloc_array(SMgrRelation, maxrels);
}
else if (maxrels <= nrels)
{
maxrels *= 2;
- srels = repalloc(srels, sizeof(SMgrRelation) * maxrels);
+ srels = repalloc_array(srels, SMgrRelation, maxrels);
}
srels[nrels++] = srel;
@@ -813,12 +816,13 @@ smgrDoPendingSyncs(bool isCommit, bool isParallelWorker)
if (maxrels == 0)
{
maxrels = 8;
- srels = palloc(sizeof(SMgrRelation) * maxrels);
+ srels = palloc_array(SMgrRelation, maxrels);
}
else if (maxrels <= nrels)
{
maxrels *= 2;
- srels = repalloc(srels, sizeof(SMgrRelation) * maxrels);
+ srels = repalloc_array(srels, SMgrRelation, maxrels);
}
srels[nrels++] = srel;
@@ -893,7 +897,7 @@ smgrGetPendingDeletes(bool forCommit, RelFileLocator **ptr)
*ptr = NULL;
return 0;
}
- rptr = (RelFileLocator *) palloc(nrels * sizeof(RelFileLocator));
+ rptr = palloc_array(RelFileLocator, nrels);
*ptr = rptr;
for (pending = pendingDeletes; pending != NULL; pending = pending->next)
{
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index 78c1d4e1b84..27ad7d3fd0a 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -324,9 +324,9 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
}
/* Build modified tuple */
- values = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(Datum));
- nulls = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(bool));
- replaces = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(bool));
+ values = palloc0_array(Datum, RelationGetNumberOfAttributes(rel));
+ nulls = palloc0_array(bool, RelationGetNumberOfAttributes(rel));
+ replaces = palloc0_array(bool, RelationGetNumberOfAttributes(rel));
namestrcpy(&nameattrdata, new_name);
values[Anum_name - 1] = NameGetDatum(&nameattrdata);
replaces[Anum_name - 1] = true;
@@ -786,9 +786,9 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
nspOid);
/* Build modified tuple */
- values = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(Datum));
- nulls = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(bool));
- replaces = palloc0(RelationGetNumberOfAttributes(rel) * sizeof(bool));
+ values = palloc0_array(Datum, RelationGetNumberOfAttributes(rel));
+ nulls = palloc0_array(bool, RelationGetNumberOfAttributes(rel));
+ replaces = palloc0_array(bool, RelationGetNumberOfAttributes(rel));
values[Anum_namespace - 1] = ObjectIdGetDatum(nspOid);
replaces[Anum_namespace - 1] = true;
newtup = heap_modify_tuple(tup, RelationGetDescr(rel),
@@ -996,9 +996,9 @@ AlterObjectOwner_internal(Oid classId, Oid objectId, Oid new_ownerId)
/* Build a modified tuple */
nattrs = RelationGetNumberOfAttributes(rel);
- values = palloc0(nattrs * sizeof(Datum));
- nulls = palloc0(nattrs * sizeof(bool));
- replaces = palloc0(nattrs * sizeof(bool));
+ values = palloc0_array(Datum, nattrs);
+ nulls = palloc0_array(bool, nattrs);
+ replaces = palloc0_array(bool, nattrs);
values[Anum_owner - 1] = ObjectIdGetDatum(new_ownerId);
replaces[Anum_owner - 1] = true;
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 2a7769b1fd1..a06261a5f43 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -372,8 +372,8 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
Bitmapset *unique_cols = NULL;
ListCell *le;
- vacattrstats = (VacAttrStats **) palloc(list_length(va_cols) *
- sizeof(VacAttrStats *));
+ vacattrstats = palloc_array(VacAttrStats *,
+ list_length(va_cols));
tcnt = 0;
foreach(le, va_cols)
{
@@ -401,8 +401,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
else
{
attr_cnt = onerel->rd_att->natts;
- vacattrstats = (VacAttrStats **)
- palloc(attr_cnt * sizeof(VacAttrStats *));
+ vacattrstats = palloc_array(VacAttrStats *, attr_cnt);
tcnt = 0;
for (i = 1; i <= attr_cnt; i++)
{
@@ -445,7 +444,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
indexdata = NULL;
if (nindexes > 0)
{
- indexdata = (AnlIndexData *) palloc0(nindexes * sizeof(AnlIndexData));
+ indexdata = palloc0_array(AnlIndexData, nindexes);
for (ind = 0; ind < nindexes; ind++)
{
AnlIndexData *thisdata = &indexdata[ind];
@@ -521,7 +520,7 @@ do_analyze_rel(Relation onerel, VacuumParams *params,
/*
* Acquire the sample rows
*/
- rows = (HeapTuple *) palloc(targrows * sizeof(HeapTuple));
+ rows = palloc_array(HeapTuple, targrows);
pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
inh ? PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :
PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);
@@ -905,8 +904,8 @@ compute_index_stats(Relation onerel, double totalrows,
predicate = ExecPrepareQual(indexInfo->ii_Predicate, estate);
/* Compute and save index expression values */
- exprvals = (Datum *) palloc(numrows * attr_cnt * sizeof(Datum));
- exprnulls = (bool *) palloc(numrows * attr_cnt * sizeof(bool));
+ exprvals = palloc_array(Datum, numrows * attr_cnt);
+ exprnulls = palloc_array(bool, numrows * attr_cnt);
numindexrows = 0;
tcnt = 0;
for (rowno = 0; rowno < numrows; rowno++)
@@ -1057,7 +1056,7 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
/*
* Create the VacAttrStats struct.
*/
- stats = (VacAttrStats *) palloc0(sizeof(VacAttrStats));
+ stats = palloc0_object(VacAttrStats);
stats->attstattarget = attstattarget;
/*
@@ -1416,10 +1415,10 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
* Identify acquirefuncs to use, and count blocks in all the relations.
* The result could overflow BlockNumber, so we use double arithmetic.
*/
- rels = (Relation *) palloc(list_length(tableOIDs) * sizeof(Relation));
- acquirefuncs = (AcquireSampleRowsFunc *)
- palloc(list_length(tableOIDs) * sizeof(AcquireSampleRowsFunc));
- relblocks = (double *) palloc(list_length(tableOIDs) * sizeof(double));
+ rels = palloc_array(Relation, list_length(tableOIDs));
+ acquirefuncs = palloc_array(AcquireSampleRowsFunc,
+ list_length(tableOIDs));
+ relblocks = palloc_array(double, list_length(tableOIDs));
totalblocks = 0;
nrels = 0;
has_child = false;
@@ -1695,7 +1694,8 @@ update_attstats(Oid relid, bool inh, int natts, VacAttrStats **vacattrstats)
if (nnum > 0)
{
- Datum *numdatums = (Datum *) palloc(nnum * sizeof(Datum));
+ Datum *numdatums = palloc_array(Datum,
+ nnum);
ArrayType *arry;
for (n = 0; n < nnum; n++)
@@ -1884,7 +1884,7 @@ std_typanalyze(VacAttrStats *stats)
NULL);
/* Save the operator info for compute_stats routines */
- mystats = (StdAnalyzeData *) palloc(sizeof(StdAnalyzeData));
+ mystats = palloc_object(StdAnalyzeData);
mystats->eqopr = eqopr;
mystats->eqfunc = OidIsValid(eqopr) ? get_opcode(eqopr) : InvalidOid;
mystats->ltopr = ltopr;
@@ -2067,7 +2067,7 @@ compute_distinct_stats(VacAttrStatsP stats,
track_max = 2 * num_mcv;
if (track_max < 10)
track_max = 10;
- track = (TrackItem *) palloc(track_max * sizeof(TrackItem));
+ track = palloc_array(TrackItem, track_max);
track_cnt = 0;
fmgr_info(mystats->eqfunc, &f_cmpeq);
@@ -2304,7 +2304,7 @@ compute_distinct_stats(VacAttrStatsP stats,
if (num_mcv > 0)
{
- mcv_counts = (int *) palloc(num_mcv * sizeof(int));
+ mcv_counts = palloc_array(int, num_mcv);
for (i = 0; i < num_mcv; i++)
mcv_counts[i] = track[i].count;
@@ -2324,8 +2324,8 @@ compute_distinct_stats(VacAttrStatsP stats,
/* Must copy the target values into anl_context */
old_context = MemoryContextSwitchTo(stats->anl_context);
- mcv_values = (Datum *) palloc(num_mcv * sizeof(Datum));
- mcv_freqs = (float4 *) palloc(num_mcv * sizeof(float4));
+ mcv_values = palloc_array(Datum, num_mcv);
+ mcv_freqs = palloc_array(float4, num_mcv);
for (i = 0; i < num_mcv; i++)
{
mcv_values[i] = datumCopy(track[i].value,
@@ -2403,9 +2403,9 @@ compute_scalar_stats(VacAttrStatsP stats,
int num_bins = stats->attstattarget;
StdAnalyzeData *mystats = (StdAnalyzeData *) stats->extra_data;
- values = (ScalarItem *) palloc(samplerows * sizeof(ScalarItem));
- tupnoLink = (int *) palloc(samplerows * sizeof(int));
- track = (ScalarMCVItem *) palloc(num_mcv * sizeof(ScalarMCVItem));
+ values = palloc_array(ScalarItem, samplerows);
+ tupnoLink = palloc_array(int, samplerows);
+ track = palloc_array(ScalarMCVItem, num_mcv);
memset(&ssup, 0, sizeof(ssup));
ssup.ssup_cxt = CurrentMemoryContext;
@@ -2669,7 +2669,7 @@ compute_scalar_stats(VacAttrStatsP stats,
if (num_mcv > 0)
{
- mcv_counts = (int *) palloc(num_mcv * sizeof(int));
+ mcv_counts = palloc_array(int, num_mcv);
for (i = 0; i < num_mcv; i++)
mcv_counts[i] = track[i].count;
@@ -2689,8 +2689,8 @@ compute_scalar_stats(VacAttrStatsP stats,
/* Must copy the target values into anl_context */
old_context = MemoryContextSwitchTo(stats->anl_context);
- mcv_values = (Datum *) palloc(num_mcv * sizeof(Datum));
- mcv_freqs = (float4 *) palloc(num_mcv * sizeof(float4));
+ mcv_values = palloc_array(Datum, num_mcv);
+ mcv_freqs = palloc_array(float4, num_mcv);
for (i = 0; i < num_mcv; i++)
{
mcv_values[i] = datumCopy(values[track[i].first].value,
@@ -2784,7 +2784,7 @@ compute_scalar_stats(VacAttrStatsP stats,
/* Must copy the target values into anl_context */
old_context = MemoryContextSwitchTo(stats->anl_context);
- hist_values = (Datum *) palloc(num_hist * sizeof(Datum));
+ hist_values = palloc_array(Datum, num_hist);
/*
* The object of this loop is to copy the first and last values[]
@@ -2839,7 +2839,7 @@ compute_scalar_stats(VacAttrStatsP stats,
/* Must copy the target values into anl_context */
old_context = MemoryContextSwitchTo(stats->anl_context);
- corrs = (float4 *) palloc(sizeof(float4));
+ corrs = palloc_object(float4);
MemoryContextSwitchTo(old_context);
/*----------
diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index 4bd37d5beb5..043b1411637 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -1592,8 +1592,8 @@ SignalBackends(void)
* XXX in principle these pallocs could fail, which would be bad. Maybe
* preallocate the arrays? They're not that large, though.
*/
- pids = (int32 *) palloc(MaxBackends * sizeof(int32));
- procnos = (ProcNumber *) palloc(MaxBackends * sizeof(ProcNumber));
+ pids = palloc_array(int32, MaxBackends);
+ procnos = palloc_array(ProcNumber, MaxBackends);
count = 0;
LWLockAcquire(NotifyQueueLock, LW_EXCLUSIVE);
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 99193f5c886..ee4d899a940 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -1667,7 +1667,7 @@ get_tables_to_cluster(MemoryContext cluster_context)
/* Use a permanent memory context for the result list */
old_context = MemoryContextSwitchTo(cluster_context);
- rtc = (RelToCluster *) palloc(sizeof(RelToCluster));
+ rtc = palloc_object(RelToCluster);
rtc->tableOid = index->indrelid;
rtc->indexOid = index->indexrelid;
rtcs = lappend(rtcs, rtc);
@@ -1721,7 +1721,7 @@ get_tables_to_cluster_partitioned(MemoryContext cluster_context, Oid indexOid)
/* Use a permanent memory context for the result list */
old_context = MemoryContextSwitchTo(cluster_context);
- rtc = (RelToCluster *) palloc(sizeof(RelToCluster));
+ rtc = palloc_object(RelToCluster);
rtc->tableOid = relid;
rtc->indexOid = indexrelid;
rtcs = lappend(rtcs, rtc);
diff --git a/src/backend/commands/collationcmds.c b/src/backend/commands/collationcmds.c
index 8acbfbbeda0..71df9bd9413 100644
--- a/src/backend/commands/collationcmds.c
+++ b/src/backend/commands/collationcmds.c
@@ -862,7 +862,7 @@ pg_import_system_collations(PG_FUNCTION_ARGS)
/* expansible array of aliases */
maxaliases = 100;
- aliases = (CollAliasData *) palloc(maxaliases * sizeof(CollAliasData));
+ aliases = palloc_array(CollAliasData, maxaliases);
naliases = 0;
locale_a_handle = OpenPipeStream("locale -a", "r");
@@ -906,8 +906,9 @@ pg_import_system_collations(PG_FUNCTION_ARGS)
if (naliases >= maxaliases)
{
maxaliases *= 2;
- aliases = (CollAliasData *)
- repalloc(aliases, maxaliases * sizeof(CollAliasData));
+ aliases = repalloc_array(aliases,
+ CollAliasData,
+ maxaliases);
}
aliases[naliases].localename = pstrdup(localebuf);
aliases[naliases].alias = pstrdup(alias);
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index cfca9d9dc29..e03923c1412 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -508,7 +508,7 @@ ProcessCopyOptions(ParseState *pstate,
/* Support external use for option sanity checking */
if (opts_out == NULL)
- opts_out = (CopyFormatOptions *) palloc0(sizeof(CopyFormatOptions));
+ opts_out = palloc0_object(CopyFormatOptions);
opts_out->file_encoding = -1;
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index 0cbd05f5602..43998ac5a96 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -225,7 +225,7 @@ CopyMultiInsertBufferInit(ResultRelInfo *rri)
{
CopyMultiInsertBuffer *buffer;
- buffer = (CopyMultiInsertBuffer *) palloc(sizeof(CopyMultiInsertBuffer));
+ buffer = palloc_object(CopyMultiInsertBuffer);
memset(buffer->slots, 0, sizeof(TupleTableSlot *) * MAX_BUFFERED_TUPLES);
buffer->resultRelInfo = rri;
buffer->bistate = (rri->ri_FdwRoutine == NULL) ? GetBulkInsertState() : NULL;
@@ -1620,10 +1620,10 @@ BeginCopyFrom(ParseState *pstate,
* the input function), and info about defaults and constraints. (Which
* input function we use depends on text/binary format choice.)
*/
- in_functions = (FmgrInfo *) palloc(num_phys_attrs * sizeof(FmgrInfo));
- typioparams = (Oid *) palloc(num_phys_attrs * sizeof(Oid));
- defmap = (int *) palloc(num_phys_attrs * sizeof(int));
- defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
+ in_functions = palloc_array(FmgrInfo, num_phys_attrs);
+ typioparams = palloc_array(Oid, num_phys_attrs);
+ defmap = palloc_array(int, num_phys_attrs);
+ defexprs = palloc_array(ExprState *, num_phys_attrs);
for (int attnum = 1; attnum <= num_phys_attrs; attnum++)
{
diff --git a/src/backend/commands/copyto.c b/src/backend/commands/copyto.c
index 99cb23cb347..67816ebf433 100644
--- a/src/backend/commands/copyto.c
+++ b/src/backend/commands/copyto.c
@@ -1275,7 +1275,7 @@ copy_dest_destroy(DestReceiver *self)
DestReceiver *
CreateCopyDestReceiver(void)
{
- DR_copy *self = (DR_copy *) palloc(sizeof(DR_copy));
+ DR_copy *self = palloc_object(DR_copy);
self->pub.receiveSlot = copy_dest_receive;
self->pub.rStartup = copy_dest_startup;
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 23cecd99c9e..d6c4e7ce238 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -437,7 +437,7 @@ CreateTableAsRelExists(CreateTableAsStmt *ctas)
DestReceiver *
CreateIntoRelDestReceiver(IntoClause *intoClause)
{
- DR_intorel *self = (DR_intorel *) palloc0(sizeof(DR_intorel));
+ DR_intorel *self = palloc0_object(DR_intorel);
self->pub.receiveSlot = intorel_receive;
self->pub.rStartup = intorel_startup;
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index 46310add459..6cc1bcc3d97 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -429,7 +429,7 @@ ScanSourceDatabasePgClassTuple(HeapTupleData *tuple, Oid tbid, Oid dbid,
classForm->oid);
/* Prepare a rel info element and add it to the list. */
- relinfo = (CreateDBRelInfo *) palloc(sizeof(CreateDBRelInfo));
+ relinfo = palloc_object(CreateDBRelInfo);
if (OidIsValid(classForm->reltablespace))
relinfo->rlocator.spcOid = classForm->reltablespace;
else
@@ -3024,7 +3024,7 @@ remove_dbtablespaces(Oid db_id)
return;
}
- tablespace_ids = (Oid *) palloc(ntblspc * sizeof(Oid));
+ tablespace_ids = palloc_array(Oid, ntblspc);
i = 0;
foreach(cell, ltblspc)
tablespace_ids[i++] = lfirst_oid(cell);
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index edc2c988e29..cb0d2ebb658 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -360,7 +360,7 @@ filter_list_to_array(List *filterlist)
int i = 0,
l = list_length(filterlist);
- data = (Datum *) palloc(l * sizeof(Datum));
+ data = palloc_array(Datum, l);
foreach(lc, filterlist)
{
@@ -1288,7 +1288,7 @@ EventTriggerSQLDropAddObject(const ObjectAddress *object, bool original, bool no
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- obj = palloc0(sizeof(SQLDropObject));
+ obj = palloc0_object(SQLDropObject);
obj->address = *object;
obj->original = original;
obj->normal = normal;
@@ -1594,7 +1594,7 @@ EventTriggerCollectSimpleCommand(ObjectAddress address,
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc(sizeof(CollectedCommand));
+ command = palloc_object(CollectedCommand);
command->type = SCT_Simple;
command->in_extension = creating_extension;
@@ -1630,7 +1630,7 @@ EventTriggerAlterTableStart(Node *parsetree)
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc(sizeof(CollectedCommand));
+ command = palloc_object(CollectedCommand);
command->type = SCT_AlterTable;
command->in_extension = creating_extension;
@@ -1686,7 +1686,7 @@ EventTriggerCollectAlterTableSubcmd(Node *subcmd, ObjectAddress address)
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- newsub = palloc(sizeof(CollectedATSubcmd));
+ newsub = palloc_object(CollectedATSubcmd);
newsub->address = address;
newsub->parsetree = copyObject(subcmd);
@@ -1760,7 +1760,7 @@ EventTriggerCollectGrant(InternalGrant *istmt)
/*
* This is tedious, but necessary.
*/
- icopy = palloc(sizeof(InternalGrant));
+ icopy = palloc_object(InternalGrant);
memcpy(icopy, istmt, sizeof(InternalGrant));
icopy->objects = list_copy(istmt->objects);
icopy->grantees = list_copy(istmt->grantees);
@@ -1769,7 +1769,7 @@ EventTriggerCollectGrant(InternalGrant *istmt)
icopy->col_privs = lappend(icopy->col_privs, copyObject(lfirst(cell)));
/* Now collect it, using the copied InternalGrant */
- command = palloc(sizeof(CollectedCommand));
+ command = palloc_object(CollectedCommand);
command->type = SCT_Grant;
command->in_extension = creating_extension;
command->d.grant.istmt = icopy;
@@ -1800,7 +1800,7 @@ EventTriggerCollectAlterOpFam(AlterOpFamilyStmt *stmt, Oid opfamoid,
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc(sizeof(CollectedCommand));
+ command = palloc_object(CollectedCommand);
command->type = SCT_AlterOpFamily;
command->in_extension = creating_extension;
ObjectAddressSet(command->d.opfam.address,
@@ -1833,7 +1833,7 @@ EventTriggerCollectCreateOpClass(CreateOpClassStmt *stmt, Oid opcoid,
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc0(sizeof(CollectedCommand));
+ command = palloc0_object(CollectedCommand);
command->type = SCT_CreateOpClass;
command->in_extension = creating_extension;
ObjectAddressSet(command->d.createopc.address,
@@ -1867,7 +1867,7 @@ EventTriggerCollectAlterTSConfig(AlterTSConfigurationStmt *stmt, Oid cfgId,
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc0(sizeof(CollectedCommand));
+ command = palloc0_object(CollectedCommand);
command->type = SCT_AlterTSConfig;
command->in_extension = creating_extension;
ObjectAddressSet(command->d.atscfg.address,
@@ -1901,7 +1901,7 @@ EventTriggerCollectAlterDefPrivs(AlterDefaultPrivilegesStmt *stmt)
oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
- command = palloc0(sizeof(CollectedCommand));
+ command = palloc0_object(CollectedCommand);
command->type = SCT_AlterDefaultPrivileges;
command->d.defprivs.objtype = stmt->action->objtype;
command->in_extension = creating_extension;
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index c24e66f82e1..0bbaf91eb89 100644
--- a/src/backend/commands/explain.c
+++ b/src/backend/commands/explain.c
@@ -386,7 +386,7 @@ ExplainQuery(ParseState *pstate, ExplainStmt *stmt,
ExplainState *
NewExplainState(void)
{
- ExplainState *es = (ExplainState *) palloc0(sizeof(ExplainState));
+ ExplainState *es = palloc0_object(ExplainState);
/* Set default options (most fields can be left as zeroes). */
es->costs = true;
@@ -4809,7 +4809,7 @@ ExplainCreateWorkersState(int num_workers)
{
ExplainWorkersState *wstate;
- wstate = (ExplainWorkersState *) palloc(sizeof(ExplainWorkersState));
+ wstate = palloc_object(ExplainWorkersState);
wstate->num_workers = num_workers;
wstate->worker_inited = (bool *) palloc0(num_workers * sizeof(bool));
wstate->worker_str = (StringInfoData *)
@@ -5884,7 +5884,7 @@ CreateExplainSerializeDestReceiver(ExplainState *es)
{
SerializeDestReceiver *self;
- self = (SerializeDestReceiver *) palloc0(sizeof(SerializeDestReceiver));
+ self = palloc0_object(SerializeDestReceiver);
self->pub.receiveSlot = serializeAnalyzeReceive;
self->pub.rStartup = serializeAnalyzeStartup;
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index ba540e3de5b..222cf6b3eca 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -608,7 +608,7 @@ read_extension_control_file(const char *extname)
/*
* Set up default values. Pointer fields are initially null.
*/
- control = (ExtensionControlFile *) palloc0(sizeof(ExtensionControlFile));
+ control = palloc0_object(ExtensionControlFile);
control->name = pstrdup(extname);
control->relocatable = false;
control->superuser = true;
@@ -638,7 +638,7 @@ read_extension_aux_control_file(const ExtensionControlFile *pcontrol,
/*
* Flat-copy the struct. Pointer fields share values with original.
*/
- acontrol = (ExtensionControlFile *) palloc(sizeof(ExtensionControlFile));
+ acontrol = palloc_object(ExtensionControlFile);
memcpy(acontrol, pcontrol, sizeof(ExtensionControlFile));
/*
@@ -1263,7 +1263,7 @@ get_ext_ver_info(const char *versionname, List **evi_list)
return evi;
}
- evi = (ExtensionVersionInfo *) palloc(sizeof(ExtensionVersionInfo));
+ evi = palloc_object(ExtensionVersionInfo);
evi->name = pstrdup(versionname);
evi->reachable = NIL;
evi->installable = false;
@@ -2429,7 +2429,7 @@ convert_requires_to_datum(List *requires)
ListCell *lc;
ndatums = list_length(requires);
- datums = (Datum *) palloc(ndatums * sizeof(Datum));
+ datums = palloc_array(Datum, ndatums);
ndatums = 0;
foreach(lc, requires)
{
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index b9fd7683abb..ab82a9191e1 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -210,10 +210,10 @@ interpret_function_parameter_list(ParseState *pstate,
*variadicArgType = InvalidOid; /* default result */
*requiredResultType = InvalidOid; /* default result */
- inTypes = (Oid *) palloc(parameterCount * sizeof(Oid));
- allTypes = (Datum *) palloc(parameterCount * sizeof(Datum));
- paramModes = (Datum *) palloc(parameterCount * sizeof(Datum));
- paramNames = (Datum *) palloc0(parameterCount * sizeof(Datum));
+ inTypes = palloc_array(Oid, parameterCount);
+ allTypes = palloc_array(Datum, parameterCount);
+ paramModes = palloc_array(Datum, parameterCount);
+ paramNames = palloc0_array(Datum, parameterCount);
*parameterDefaults = NIL;
/* Scan the list and extract data into work arrays */
@@ -1222,7 +1222,7 @@ CreateFunction(ParseState *pstate, CreateFunctionStmt *stmt)
Datum *arr;
int i;
- arr = palloc(list_length(trftypes_list) * sizeof(Datum));
+ arr = palloc_array(Datum, list_length(trftypes_list));
i = 0;
foreach(lc, trftypes_list)
arr[i++] = ObjectIdGetDatum(lfirst_oid(lc));
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index c12817091ed..7460438fc5d 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -464,7 +464,7 @@ refresh_matview_datafill(DestReceiver *dest, Query *query,
DestReceiver *
CreateTransientRelDestReceiver(Oid transientoid)
{
- DR_transientrel *self = (DR_transientrel *) palloc0(sizeof(DR_transientrel));
+ DR_transientrel *self = palloc0_object(DR_transientrel);
self->pub.receiveSlot = transientrel_receive;
self->pub.rStartup = transientrel_startup;
@@ -725,7 +725,7 @@ refresh_by_match_merge(Oid matviewOid, Oid tempOid, Oid relowner,
* include all rows.
*/
tupdesc = matviewRel->rd_att;
- opUsedForQual = (Oid *) palloc0(sizeof(Oid) * relnatts);
+ opUsedForQual = palloc0_array(Oid, relnatts);
foundUniqueIndex = false;
indexoidlist = RelationGetIndexList(matviewRel);
diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c
index 2c325badf94..7d711cda0bb 100644
--- a/src/backend/commands/opclasscmds.c
+++ b/src/backend/commands/opclasscmds.c
@@ -523,7 +523,7 @@ DefineOpClass(CreateOpClassStmt *stmt)
#endif
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = false;
member->object = operOid;
member->number = item->number;
@@ -547,7 +547,7 @@ DefineOpClass(CreateOpClassStmt *stmt)
get_func_name(funcOid));
#endif
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = true;
member->object = funcOid;
member->number = item->number;
@@ -940,7 +940,7 @@ AlterOpFamilyAdd(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid,
#endif
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = false;
member->object = operOid;
member->number = item->number;
@@ -970,7 +970,7 @@ AlterOpFamilyAdd(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid,
#endif
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = true;
member->object = funcOid;
member->number = item->number;
@@ -1058,7 +1058,7 @@ AlterOpFamilyDrop(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid,
item->number, maxOpNumber)));
processTypesSpec(item->class_args, &lefttype, &righttype);
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = false;
member->number = item->number;
member->lefttype = lefttype;
@@ -1074,7 +1074,7 @@ AlterOpFamilyDrop(AlterOpFamilyStmt *stmt, Oid amoid, Oid opfamilyoid,
item->number, maxProcNumber)));
processTypesSpec(item->class_args, &lefttype, &righttype);
/* Save the info */
- member = (OpFamilyMember *) palloc0(sizeof(OpFamilyMember));
+ member = palloc0_object(OpFamilyMember);
member->is_func = true;
member->number = item->number;
member->lefttype = lefttype;
diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c
index 83056960fe4..befc3a4ec6f 100644
--- a/src/backend/commands/policy.c
+++ b/src/backend/commands/policy.c
@@ -144,14 +144,14 @@ policy_role_list_to_array(List *roles, int *num_roles)
if (roles == NIL)
{
*num_roles = 1;
- role_oids = (Datum *) palloc(*num_roles * sizeof(Datum));
+ role_oids = palloc_array(Datum, *num_roles);
role_oids[0] = ObjectIdGetDatum(ACL_ID_PUBLIC);
return role_oids;
}
*num_roles = list_length(roles);
- role_oids = (Datum *) palloc(*num_roles * sizeof(Datum));
+ role_oids = palloc_array(Datum, *num_roles);
foreach(cell, roles)
{
@@ -471,7 +471,7 @@ RemoveRoleFromObjectPolicy(Oid roleid, Oid classid, Oid policy_id)
* Ordinarily there'd be exactly one, but we must cope with duplicate
* mentions, since CREATE/ALTER POLICY historically have allowed that.
*/
- role_oids = (Datum *) palloc(num_roles * sizeof(Datum));
+ role_oids = palloc_array(Datum, num_roles);
for (i = 0, j = 0; i < num_roles; i++)
{
if (roles[i] != roleid)
@@ -945,7 +945,7 @@ AlterPolicy(AlterPolicyStmt *stmt)
nitems = ARR_DIMS(policy_roles)[0];
- role_oids = (Datum *) palloc(nitems * sizeof(Datum));
+ role_oids = palloc_array(Datum, nitems);
for (i = 0; i < nitems; i++)
role_oids[i] = ObjectIdGetDatum(roles[i]);
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 35747b3df5f..49a549c7101 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -1261,7 +1261,7 @@ AlterPublicationTables(AlterPublicationStmt *stmt, HeapTuple tup,
*/
if (!found)
{
- oldrel = palloc(sizeof(PublicationRelInfo));
+ oldrel = palloc_object(PublicationRelInfo);
oldrel->whereClause = NULL;
oldrel->columns = NIL;
oldrel->relation = table_open(oldrelid,
@@ -1643,7 +1643,7 @@ OpenTableList(List *tables)
continue;
}
- pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel = palloc_object(PublicationRelInfo);
pub_rel->relation = rel;
pub_rel->whereClause = t->whereClause;
pub_rel->columns = t->columns;
@@ -1712,7 +1712,7 @@ OpenTableList(List *tables)
/* find_all_inheritors already got lock */
rel = table_open(childrelid, NoLock);
- pub_rel = palloc(sizeof(PublicationRelInfo));
+ pub_rel = palloc_object(PublicationRelInfo);
pub_rel->relation = rel;
/* child inherits WHERE clause from parent */
pub_rel->whereClause = t->whereClause;
diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c
index cee5d7bbb9c..07bed6e1487 100644
--- a/src/backend/commands/seclabel.c
+++ b/src/backend/commands/seclabel.c
@@ -573,7 +573,7 @@ register_label_provider(const char *provider_name, check_object_relabel_type hoo
MemoryContext oldcxt;
oldcxt = MemoryContextSwitchTo(TopMemoryContext);
- provider = palloc(sizeof(LabelProvider));
+ provider = palloc_object(LabelProvider);
provider->provider_name = pstrdup(provider_name);
provider->hook = hook;
label_provider_list = lappend(label_provider_list, provider);
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2d8a71ca1e1..6d59d315bbd 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -519,7 +519,7 @@ publicationListToArray(List *publist)
ALLOCSET_DEFAULT_SIZES);
oldcxt = MemoryContextSwitchTo(memcxt);
- datums = (Datum *) palloc(sizeof(Datum) * list_length(publist));
+ datums = palloc_array(Datum, list_length(publist));
check_duplicates_in_publist(publist, datums);
@@ -869,7 +869,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
* potentially contain all tables in the database so speed of lookup
* is important.
*/
- subrel_local_oids = palloc(subrel_count * sizeof(Oid));
+ subrel_local_oids = palloc_array(Oid, subrel_count);
off = 0;
foreach(lc, subrel_states)
{
@@ -888,7 +888,7 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
* Rels that we want to remove from subscription and drop any slots
* and origins corresponding to them.
*/
- sub_remove_rels = palloc(subrel_count * sizeof(SubRemoveRels));
+ sub_remove_rels = palloc_array(SubRemoveRels, subrel_count);
/*
* Walk over the remote tables and try to match them to locally known
@@ -898,7 +898,8 @@ AlterSubscription_refresh(Subscription *sub, bool copy_data,
* Also builds array of local oids of remote tables for the next step.
*/
off = 0;
- pubrel_local_oids = palloc(list_length(pubrel_names) * sizeof(Oid));
+ pubrel_local_oids = palloc_array(Oid,
+ list_length(pubrel_names));
foreach(lc, pubrel_names)
{
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index d2420a9558c..187daeaedc2 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -966,7 +966,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
Assert(colDef->cooked_default == NULL);
- rawEnt = (RawColumnDefault *) palloc(sizeof(RawColumnDefault));
+ rawEnt = palloc_object(RawColumnDefault);
rawEnt->attnum = attnum;
rawEnt->raw_default = colDef->raw_default;
rawEnt->missingMode = false;
@@ -978,7 +978,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
{
CookedConstraint *cooked;
- cooked = (CookedConstraint *) palloc(sizeof(CookedConstraint));
+ cooked = palloc_object(CookedConstraint);
cooked->contype = CONSTR_DEFAULT;
cooked->conoid = InvalidOid; /* until created */
cooked->name = NULL;
@@ -2061,8 +2061,7 @@ ExecuteTruncateGuts(List *explicit_rels,
* ExecGetTriggerResultRel() find them.
*/
estate = CreateExecutorState();
- resultRelInfos = (ResultRelInfo *)
- palloc(list_length(rels) * sizeof(ResultRelInfo));
+ resultRelInfos = palloc_array(ResultRelInfo, list_length(rels));
resultRelInfo = resultRelInfos;
foreach(cell, rels)
{
@@ -2269,7 +2268,7 @@ ExecuteTruncateGuts(List *explicit_rels,
/* should only get here if wal_level >= logical */
Assert(XLogLogicalInfoActive());
- logrelids = palloc(list_length(relids_logged) * sizeof(Oid));
+ logrelids = palloc_array(Oid, list_length(relids_logged));
foreach(cell, relids_logged)
logrelids[i++] = lfirst_oid(cell);
@@ -6460,7 +6459,7 @@ ATGetQueueEntry(List **wqueue, Relation rel)
* Not there, so add it. Note that we make a copy of the relation's
* existing descriptor before anything interesting can happen to it.
*/
- tab = (AlteredTableInfo *) palloc0(sizeof(AlteredTableInfo));
+ tab = palloc0_object(AlteredTableInfo);
tab->relid = relid;
tab->rel = NULL; /* set later */
tab->relkind = rel->rd_rel->relkind;
@@ -7298,7 +7297,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
{
RawColumnDefault *rawEnt;
- rawEnt = (RawColumnDefault *) palloc(sizeof(RawColumnDefault));
+ rawEnt = palloc_object(RawColumnDefault);
rawEnt->attnum = attribute->attnum;
rawEnt->raw_default = copyObject(colDef->raw_default);
@@ -7409,7 +7408,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
{
NewColumnValue *newval;
- newval = (NewColumnValue *) palloc0(sizeof(NewColumnValue));
+ newval = palloc0_object(NewColumnValue);
newval->attnum = attribute->attnum;
newval->expr = expression_planner(defval);
newval->is_generated = (colDef->generated != '\0');
@@ -8013,7 +8012,7 @@ ATExecColumnDefault(Relation rel, const char *colName,
/* SET DEFAULT */
RawColumnDefault *rawEnt;
- rawEnt = (RawColumnDefault *) palloc(sizeof(RawColumnDefault));
+ rawEnt = palloc_object(RawColumnDefault);
rawEnt->attnum = attnum;
rawEnt->raw_default = newDefault;
rawEnt->missingMode = false;
@@ -8478,7 +8477,7 @@ ATExecSetExpression(AlteredTableInfo *tab, Relation rel, const char *colName,
false, false);
/* Prepare to store the new expression, in the catalogs */
- rawEnt = (RawColumnDefault *) palloc(sizeof(RawColumnDefault));
+ rawEnt = palloc_object(RawColumnDefault);
rawEnt->attnum = attnum;
rawEnt->raw_default = newExpr;
rawEnt->missingMode = false;
@@ -8494,7 +8493,7 @@ ATExecSetExpression(AlteredTableInfo *tab, Relation rel, const char *colName,
/* Prepare for table rewrite */
defval = (Expr *) build_column_default(rel, attnum);
- newval = (NewColumnValue *) palloc0(sizeof(NewColumnValue));
+ newval = palloc0_object(NewColumnValue);
newval->attnum = attnum;
newval->expr = expression_planner(defval);
newval->is_generated = true;
@@ -9629,7 +9628,7 @@ ATAddCheckNNConstraint(List **wqueue, AlteredTableInfo *tab, Relation rel,
{
NewConstraint *newcon;
- newcon = (NewConstraint *) palloc0(sizeof(NewConstraint));
+ newcon = palloc0_object(NewConstraint);
newcon->name = ccon->name;
newcon->contype = ccon->contype;
newcon->qual = ccon->expr;
@@ -10616,7 +10615,8 @@ addFkRecurseReferenced(Constraint *fkconstraint, Relation rel,
false);
if (map)
{
- mapped_pkattnum = palloc(sizeof(AttrNumber) * numfks);
+ mapped_pkattnum = palloc_array(AttrNumber,
+ numfks);
for (int j = 0; j < numfks; j++)
mapped_pkattnum[j] = map->attnums[pkattnum[j] - 1];
}
@@ -10748,7 +10748,7 @@ addFkRecurseReferencing(List **wqueue, Constraint *fkconstraint, Relation rel,
tab = ATGetQueueEntry(wqueue, rel);
- newcon = (NewConstraint *) palloc0(sizeof(NewConstraint));
+ newcon = palloc0_object(NewConstraint);
newcon->name = get_constraint_name(parentConstr);
newcon->contype = CONSTR_FOREIGN;
newcon->refrelid = RelationGetRelid(pkrel);
@@ -12135,7 +12135,7 @@ QueueFKConstraintValidation(List **wqueue, Relation conrel, Relation rel,
/* for now this is all we need */
fkconstraint->conname = pstrdup(NameStr(con->conname));
- newcon = (NewConstraint *) palloc0(sizeof(NewConstraint));
+ newcon = palloc0_object(NewConstraint);
newcon->name = fkconstraint->conname;
newcon->contype = CONSTR_FOREIGN;
newcon->refrelid = con->confrelid;
@@ -12235,7 +12235,7 @@ QueueCheckConstraintValidation(List **wqueue, Relation conrel, Relation rel,
}
/* Queue validation for phase 3 */
- newcon = (NewConstraint *) palloc0(sizeof(NewConstraint));
+ newcon = palloc0_object(NewConstraint);
newcon->name = constrName;
newcon->contype = CONSTR_CHECK;
newcon->refrelid = InvalidOid;
@@ -13476,7 +13476,7 @@ ATPrepAlterColumnType(List **wqueue,
* Add a work queue item to make ATRewriteTable update the column
* contents.
*/
- newval = (NewColumnValue *) palloc0(sizeof(NewColumnValue));
+ newval = palloc0_object(NewColumnValue);
newval->attnum = attnum;
newval->expr = (Expr *) transform;
newval->is_generated = false;
@@ -18182,7 +18182,7 @@ register_on_commit_action(Oid relid, OnCommitAction action)
oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
- oc = (OnCommitItem *) palloc(sizeof(OnCommitItem));
+ oc = palloc_object(OnCommitItem);
oc->relid = relid;
oc->oncommit = action;
oc->creating_subid = GetCurrentSubTransactionId();
@@ -19479,8 +19479,8 @@ AttachPartitionEnsureIndexes(List **wqueue, Relation rel, Relation attachrel)
idxes = RelationGetIndexList(rel);
attachRelIdxs = RelationGetIndexList(attachrel);
- attachrelIdxRels = palloc(sizeof(Relation) * list_length(attachRelIdxs));
- attachInfos = palloc(sizeof(IndexInfo *) * list_length(attachRelIdxs));
+ attachrelIdxRels = palloc_array(Relation, list_length(attachRelIdxs));
+ attachInfos = palloc_array(IndexInfo *, list_length(attachRelIdxs));
/* Build arrays of all existing indexes and their IndexInfos */
foreach_oid(cldIdxId, attachRelIdxs)
diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index 4ac2763c7f3..0b436444e9c 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -1227,7 +1227,7 @@ check_temp_tablespaces(char **newval, void **extra, GucSource source)
ListCell *l;
/* temporary workspace until we are done verifying the list */
- tblSpcs = (Oid *) palloc(list_length(namelist) * sizeof(Oid));
+ tblSpcs = palloc_array(Oid, list_length(namelist));
numSpcs = 0;
foreach(l, namelist)
{
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index acf3e4a3f1f..a18bb17991f 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -929,7 +929,7 @@ CreateTriggerFiringOn(CreateTrigStmt *stmt, const char *queryString,
ListCell *cell;
int i = 0;
- columns = (int16 *) palloc(ncolumns * sizeof(int16));
+ columns = palloc_array(int16, ncolumns);
foreach(cell, stmt->columns)
{
char *name = strVal(lfirst(cell));
@@ -1873,7 +1873,7 @@ RelationBuildTriggers(Relation relation)
* necessary)
*/
maxtrigs = 16;
- triggers = (Trigger *) palloc(maxtrigs * sizeof(Trigger));
+ triggers = palloc_array(Trigger, maxtrigs);
numtrigs = 0;
/*
@@ -1901,7 +1901,7 @@ RelationBuildTriggers(Relation relation)
if (numtrigs >= maxtrigs)
{
maxtrigs *= 2;
- triggers = (Trigger *) repalloc(triggers, maxtrigs * sizeof(Trigger));
+ triggers = repalloc_array(triggers, Trigger, maxtrigs);
}
build = &(triggers[numtrigs]);
@@ -1988,7 +1988,7 @@ RelationBuildTriggers(Relation relation)
}
/* Build trigdesc */
- trigdesc = (TriggerDesc *) palloc0(sizeof(TriggerDesc));
+ trigdesc = palloc0_object(TriggerDesc);
trigdesc->triggers = triggers;
trigdesc->numtriggers = numtrigs;
for (i = 0; i < numtrigs; i++)
@@ -2093,10 +2093,10 @@ CopyTriggerDesc(TriggerDesc *trigdesc)
if (trigdesc == NULL || trigdesc->numtriggers <= 0)
return NULL;
- newdesc = (TriggerDesc *) palloc(sizeof(TriggerDesc));
+ newdesc = palloc_object(TriggerDesc);
memcpy(newdesc, trigdesc, sizeof(TriggerDesc));
- trigger = (Trigger *) palloc(trigdesc->numtriggers * sizeof(Trigger));
+ trigger = palloc_array(Trigger, trigdesc->numtriggers);
memcpy(trigger, trigdesc->triggers,
trigdesc->numtriggers * sizeof(Trigger));
newdesc->triggers = trigger;
@@ -2108,7 +2108,7 @@ CopyTriggerDesc(TriggerDesc *trigdesc)
{
int16 *newattr;
- newattr = (int16 *) palloc(trigger->tgnattr * sizeof(int16));
+ newattr = palloc_array(int16, trigger->tgnattr);
memcpy(newattr, trigger->tgattr,
trigger->tgnattr * sizeof(int16));
trigger->tgattr = newattr;
@@ -2118,7 +2118,7 @@ CopyTriggerDesc(TriggerDesc *trigdesc)
char **newargs;
int16 j;
- newargs = (char **) palloc(trigger->tgnargs * sizeof(char *));
+ newargs = palloc_array(char *, trigger->tgnargs);
for (j = 0; j < trigger->tgnargs; j++)
newargs[j] = pstrdup(trigger->tgargs[j]);
trigger->tgargs = newargs;
@@ -4822,7 +4822,7 @@ GetAfterTriggersTableData(Oid relid, CmdType cmdType)
oldcxt = MemoryContextSwitchTo(CurTransactionContext);
- table = (AfterTriggersTableData *) palloc0(sizeof(AfterTriggersTableData));
+ table = palloc0_object(AfterTriggersTableData);
table->relid = relid;
table->cmdType = cmdType;
qs->tables = lappend(qs->tables, table);
@@ -4971,7 +4971,7 @@ MakeTransitionCaptureState(TriggerDesc *trigdesc, Oid relid, CmdType cmdType)
MemoryContextSwitchTo(oldcxt);
/* Now build the TransitionCaptureState struct, in caller's context */
- state = (TransitionCaptureState *) palloc0(sizeof(TransitionCaptureState));
+ state = palloc0_object(TransitionCaptureState);
state->tcs_delete_old_table = trigdesc->trig_delete_old_table;
state->tcs_update_old_table = trigdesc->trig_update_old_table;
state->tcs_update_new_table = trigdesc->trig_update_new_table;
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index ab16d42ad56..d11c08c1260 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -1027,7 +1027,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
* know that they will be used.
*/
max_slots = MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_ts_config_map);
- slot = palloc(sizeof(TupleTableSlot *) * max_slots);
+ slot = palloc_array(TupleTableSlot *, max_slots);
ScanKeyInit(&skey,
Anum_pg_ts_config_map_mapcfg,
@@ -1261,7 +1261,7 @@ getTokenTypes(Oid prsId, List *tokennames)
{
if (strcmp(strVal(val), list[j].alias) == 0)
{
- TSTokenTypeItem *ts = (TSTokenTypeItem *) palloc0(sizeof(TSTokenTypeItem));
+ TSTokenTypeItem *ts = palloc0_object(TSTokenTypeItem);
ts->num = list[j].lexid;
ts->name = pstrdup(strVal(val));
@@ -1344,7 +1344,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
* Convert list of dictionary names to array of dict OIDs
*/
ndict = list_length(stmt->dicts);
- dictIds = (Oid *) palloc(sizeof(Oid) * ndict);
+ dictIds = palloc_array(Oid, ndict);
i = 0;
foreach(c, stmt->dicts)
{
@@ -1432,7 +1432,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
/* Allocate the slots to use and initialize them */
nslots = Min(ntoken * ndict,
MAX_CATALOG_MULTI_INSERT_BYTES / sizeof(FormData_pg_ts_config_map));
- slot = palloc(sizeof(TupleTableSlot *) * nslots);
+ slot = palloc_array(TupleTableSlot *, nslots);
for (i = 0; i < nslots; i++)
slot[i] = MakeSingleTupleTableSlot(RelationGetDescr(relMap),
&TTSOpsHeapTuple);
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 3cb3ca1cca1..537d9b5fa64 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -3421,7 +3421,7 @@ get_rels_with_domain(Oid domainOid, LOCKMODE lockmode)
}
/* Build the RelToCheck entry with enough space for all atts */
- rtc = (RelToCheck *) palloc(sizeof(RelToCheck));
+ rtc = palloc_object(RelToCheck);
rtc->rel = rel;
rtc->natts = 0;
rtc->atts = (int *) palloc(sizeof(int) * RelationGetNumberOfAttributes(rel));
diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c
index 0db174e6f10..b7adb9726b2 100644
--- a/src/backend/commands/user.c
+++ b/src/backend/commands/user.c
@@ -1904,7 +1904,7 @@ AddRoleMems(Oid currentUserId, const char *rolename, Oid roleid,
else
{
Oid objectId;
- Oid *newmembers = palloc(sizeof(Oid));
+ Oid *newmembers = palloc_object(Oid);
/*
* The values for these options can be taken directly from 'popt'.
@@ -2295,7 +2295,7 @@ initialize_revoke_actions(CatCList *memlist)
if (memlist->n_members == 0)
return NULL;
- result = palloc(sizeof(RevokeRoleGrantAction) * memlist->n_members);
+ result = palloc_array(RevokeRoleGrantAction, memlist->n_members);
for (i = 0; i < memlist->n_members; i++)
result[i] = RRG_NOOP;
return result;
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 0d92e694d6a..21931093a08 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -268,7 +268,7 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
/*
* Compute the number of parallel vacuum workers to launch
*/
- will_parallel_vacuum = (bool *) palloc0(sizeof(bool) * nindexes);
+ will_parallel_vacuum = palloc0_array(bool, nindexes);
parallel_workers = parallel_vacuum_compute_workers(indrels, nindexes,
nrequested_workers,
will_parallel_vacuum);
@@ -279,7 +279,7 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
return NULL;
}
- pvs = (ParallelVacuumState *) palloc0(sizeof(ParallelVacuumState));
+ pvs = palloc0_object(ParallelVacuumState);
pvs->indrels = indrels;
pvs->nindexes = nindexes;
pvs->will_parallel_vacuum = will_parallel_vacuum;
diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 8f28da4bf94..508361389a1 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -3541,8 +3541,7 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest,
* during executor initialization. That means we don't need typcache.c to
* provide compiled exprs.
*/
- constraint_ref = (DomainConstraintRef *)
- palloc(sizeof(DomainConstraintRef));
+ constraint_ref = palloc_object(DomainConstraintRef);
InitDomainConstraintRef(ctest->resulttype,
constraint_ref,
CurrentMemoryContext,
@@ -3592,8 +3591,8 @@ ExecInitCoerceToDomain(ExprEvalStep *scratch, CoerceToDomain *ctest,
ExprEvalStep scratch2 = {0};
/* Yes, so make output workspace for MAKE_READONLY */
- domainval = (Datum *) palloc(sizeof(Datum));
- domainnull = (bool *) palloc(sizeof(bool));
+ domainval = palloc_object(Datum);
+ domainnull = palloc_object(bool);
/* Emit MAKE_READONLY */
scratch2.opcode = EEOP_MAKE_READONLY;
@@ -4140,7 +4139,7 @@ ExecBuildHash32FromAttrs(TupleDesc desc, const TupleTableSlotOps *ops,
* one column to hash or an initial value plus one column.
*/
if ((int64) numCols + (init_value != 0) > 1)
- iresult = palloc(sizeof(NullableDatum));
+ iresult = palloc_object(NullableDatum);
/* find the highest attnum so we deform the tuple to that point */
for (int i = 0; i < numCols; i++)
@@ -4306,7 +4305,7 @@ ExecBuildHash32Expr(TupleDesc desc, const TupleTableSlotOps *ops,
* than one expression to hash or an initial value plus one expression.
*/
if ((int64) num_exprs + (init_value != 0) > 1)
- iresult = palloc(sizeof(NullableDatum));
+ iresult = palloc_object(NullableDatum);
if (init_value == 0)
{
@@ -4352,7 +4351,7 @@ ExecBuildHash32Expr(TupleDesc desc, const TupleTableSlotOps *ops,
funcid = hashfunc_oids[i];
/* Allocate hash function lookup data. */
- finfo = palloc0(sizeof(FmgrInfo));
+ finfo = palloc0_object(FmgrInfo);
fcinfo = palloc0(SizeForFunctionCallInfo(1));
fmgr_info(funcid, finfo);
@@ -4521,7 +4520,7 @@ ExecBuildGroupingEqual(TupleDesc ldesc, TupleDesc rdesc,
InvokeFunctionExecuteHook(foid);
/* Set up the primary fmgr lookup information */
- finfo = palloc0(sizeof(FmgrInfo));
+ finfo = palloc0_object(FmgrInfo);
fcinfo = palloc0(SizeForFunctionCallInfo(2));
fmgr_info(foid, finfo);
fmgr_info_set_expr(NULL, finfo);
@@ -4657,7 +4656,7 @@ ExecBuildParamSetEqual(TupleDesc desc,
InvokeFunctionExecuteHook(foid);
/* Set up the primary fmgr lookup information */
- finfo = palloc0(sizeof(FmgrInfo));
+ finfo = palloc0_object(FmgrInfo);
fcinfo = palloc0(SizeForFunctionCallInfo(2));
fmgr_info(foid, finfo);
fmgr_info_set_expr(NULL, finfo);
@@ -4730,7 +4729,7 @@ ExecInitJsonExpr(JsonExpr *jsexpr, ExprState *state,
Datum *resv, bool *resnull,
ExprEvalStep *scratch)
{
- JsonExprState *jsestate = palloc0(sizeof(JsonExprState));
+ JsonExprState *jsestate = palloc0_object(JsonExprState);
ListCell *argexprlc;
ListCell *argnamelc;
List *jumps_return_null = NIL;
@@ -4781,7 +4780,7 @@ ExecInitJsonExpr(JsonExpr *jsexpr, ExprState *state,
{
Expr *argexpr = (Expr *) lfirst(argexprlc);
String *argname = lfirst_node(String, argnamelc);
- JsonPathVariable *var = palloc(sizeof(*var));
+ JsonPathVariable *var = palloc_object(JsonPathVariable);
var->name = argname->sval;
var->namelen = strlen(var->name);
@@ -4855,7 +4854,7 @@ ExecInitJsonExpr(JsonExpr *jsexpr, ExprState *state,
FunctionCallInfo fcinfo;
getTypeInputInfo(jsexpr->returning->typid, &typinput, &typioparam);
- finfo = palloc0(sizeof(FmgrInfo));
+ finfo = palloc0_object(FmgrInfo);
fcinfo = palloc0(SizeForFunctionCallInfo(3));
fmgr_info(typinput, finfo);
fmgr_info_set_expr((Node *) jsexpr->returning, finfo);
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index 1127e6f11eb..4286a56646d 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -3373,10 +3373,10 @@ ExecEvalArrayExpr(ExprState *state, ExprEvalStep *op)
char *dat;
int iitem;
- subdata = (char **) palloc(nelems * sizeof(char *));
- subbitmaps = (bits8 **) palloc(nelems * sizeof(bits8 *));
- subbytes = (int *) palloc(nelems * sizeof(int));
- subnitems = (int *) palloc(nelems * sizeof(int));
+ subdata = palloc_array(char *, nelems);
+ subbitmaps = palloc_array(bits8 *, nelems);
+ subbytes = palloc_array(int, nelems);
+ subnitems = palloc_array(int, nelems);
/* loop through and get data area from each element */
for (int elemoff = 0; elemoff < nelems; elemoff++)
@@ -3427,9 +3427,9 @@ ExecEvalArrayExpr(ExprState *state, ExprEvalStep *op)
errmsg("number of array dimensions (%d) exceeds the maximum allowed (%d)",
ndims, MAXDIM)));
- elem_dims = (int *) palloc(elem_ndims * sizeof(int));
+ elem_dims = palloc_array(int, elem_ndims);
memcpy(elem_dims, ARR_DIMS(array), elem_ndims * sizeof(int));
- elem_lbs = (int *) palloc(elem_ndims * sizeof(int));
+ elem_lbs = palloc_array(int, elem_ndims);
memcpy(elem_lbs, ARR_LBOUND(array), elem_ndims * sizeof(int));
firstone = false;
diff --git a/src/backend/executor/execGrouping.c b/src/backend/executor/execGrouping.c
index 33b124fbb0a..3aa81389716 100644
--- a/src/backend/executor/execGrouping.c
+++ b/src/backend/executor/execGrouping.c
@@ -69,7 +69,7 @@ execTuplesMatchPrepare(TupleDesc desc,
if (numCols == 0)
return NULL;
- eqFunctions = (Oid *) palloc(numCols * sizeof(Oid));
+ eqFunctions = palloc_array(Oid, numCols);
/* lookup equality functions */
for (i = 0; i < numCols; i++)
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 7c87f012c30..599e3931d55 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -185,7 +185,7 @@ ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative)
* allocate space for result arrays
*/
relationDescs = (RelationPtr) palloc(len * sizeof(Relation));
- indexInfoArray = (IndexInfo **) palloc(len * sizeof(IndexInfo *));
+ indexInfoArray = palloc_array(IndexInfo *, len);
resultRelInfo->ri_NumIndices = len;
resultRelInfo->ri_IndexRelationDescs = relationDescs;
diff --git a/src/backend/executor/execJunk.c b/src/backend/executor/execJunk.c
index 3f196de1ad2..5e125db0e29 100644
--- a/src/backend/executor/execJunk.c
+++ b/src/backend/executor/execJunk.c
@@ -93,7 +93,7 @@ ExecInitJunkFilter(List *targetList, TupleTableSlot *slot)
AttrNumber cleanResno;
ListCell *t;
- cleanMap = (AttrNumber *) palloc(cleanLength * sizeof(AttrNumber));
+ cleanMap = palloc_array(AttrNumber, cleanLength);
cleanResno = 0;
foreach(t, targetList)
{
@@ -165,7 +165,7 @@ ExecInitJunkFilterConversion(List *targetList,
cleanLength = cleanTupType->natts;
if (cleanLength > 0)
{
- cleanMap = (AttrNumber *) palloc0(cleanLength * sizeof(AttrNumber));
+ cleanMap = palloc0_array(AttrNumber, cleanLength);
t = list_head(targetList);
for (i = 0; i < cleanLength; i++)
{
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index fb8dba3ab2c..534cebb565b 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -899,7 +899,7 @@ InitPlan(QueryDesc *queryDesc, int eflags)
if (relation)
CheckValidRowMarkRel(relation, rc->markType);
- erm = (ExecRowMark *) palloc(sizeof(ExecRowMark));
+ erm = palloc_object(ExecRowMark);
erm->relation = relation;
erm->relid = relid;
erm->rti = rc->rti;
@@ -2413,7 +2413,7 @@ ExecFindRowMark(EState *estate, Index rti, bool missing_ok)
ExecAuxRowMark *
ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist)
{
- ExecAuxRowMark *aerm = (ExecAuxRowMark *) palloc0(sizeof(ExecAuxRowMark));
+ ExecAuxRowMark *aerm = palloc0_object(ExecAuxRowMark);
char resname[32];
aerm->rowmark = erm;
diff --git a/src/backend/executor/execParallel.c b/src/backend/executor/execParallel.c
index ff4d9dd1bb3..e42a06a2486 100644
--- a/src/backend/executor/execParallel.c
+++ b/src/backend/executor/execParallel.c
@@ -543,8 +543,7 @@ ExecParallelSetupTupleQueues(ParallelContext *pcxt, bool reinitialize)
return NULL;
/* Allocate memory for shared memory queue handles. */
- responseq = (shm_mq_handle **)
- palloc(pcxt->nworkers * sizeof(shm_mq_handle *));
+ responseq = palloc_array(shm_mq_handle *, pcxt->nworkers);
/*
* If not reinitializing, allocate space from the DSM for the queues;
@@ -623,7 +622,7 @@ ExecInitParallelPlan(PlanState *planstate, EState *estate,
ExecSetParamPlanMulti(sendParams, GetPerTupleExprContext(estate));
/* Allocate object for return value. */
- pei = palloc0(sizeof(ParallelExecutorInfo));
+ pei = palloc0_object(ParallelExecutorInfo);
pei->finished = false;
pei->planstate = planstate;
diff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c
index 7e71d422a62..046933bfc4b 100644
--- a/src/backend/executor/execPartition.c
+++ b/src/backend/executor/execPartition.c
@@ -223,7 +223,7 @@ ExecSetupPartitionTupleRouting(EState *estate, Relation rel)
* The reason for this is that a common case is for INSERT to insert a
* single tuple into a partitioned table and this must be fast.
*/
- proute = (PartitionTupleRouting *) palloc0(sizeof(PartitionTupleRouting));
+ proute = palloc0_object(PartitionTupleRouting);
proute->partition_root = rel;
proute->memcxt = CurrentMemoryContext;
/* Rest of members initialized by zeroing */
@@ -2201,7 +2201,7 @@ PartitionPruneFixSubPlanMap(PartitionPruneState *prunestate,
* new ones. For convenience of initialization, we use 1-based indexes in
* this array and leave pruned items as 0.
*/
- new_subplan_indexes = (int *) palloc0(sizeof(int) * n_total_subplans);
+ new_subplan_indexes = palloc0_array(int, n_total_subplans);
newidx = 1;
i = -1;
while ((i = bms_next_member(initially_valid_subplans, i)) >= 0)
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index 3985e84d3a6..98c80f61ffb 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -251,7 +251,8 @@ retry:
if (!isIdxSafeToSkipDuplicates)
{
if (eq == NULL)
- eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);
+ eq = palloc0_array(TypeCacheEntry *,
+ outslot->tts_tupleDescriptor->natts);
if (!tuples_equal(outslot, searchslot, eq))
continue;
@@ -397,7 +398,8 @@ RelationFindReplTupleSeq(Relation rel, LockTupleMode lockmode,
Assert(equalTupleDescs(desc, outslot->tts_tupleDescriptor));
- eq = palloc0(sizeof(*eq) * outslot->tts_tupleDescriptor->natts);
+ eq = palloc0_array(TypeCacheEntry *,
+ outslot->tts_tupleDescriptor->natts);
/* Start a heap scan. */
InitDirtySnapshot(snap);
diff --git a/src/backend/executor/execSRF.c b/src/backend/executor/execSRF.c
index a03fe780a02..484cf252463 100644
--- a/src/backend/executor/execSRF.c
+++ b/src/backend/executor/execSRF.c
@@ -337,7 +337,7 @@ ExecMakeTableFunctionResult(SetExprState *setexpr,
int natts = expectedDesc->natts;
bool *nullflags;
- nullflags = (bool *) palloc(natts * sizeof(bool));
+ nullflags = palloc_array(bool, natts);
memset(nullflags, true, natts * sizeof(bool));
tuplestore_putvalues(tupstore, expectedDesc, NULL, nullflags);
}
@@ -405,7 +405,7 @@ no_function_result:
int natts = expectedDesc->natts;
bool *nullflags;
- nullflags = (bool *) palloc(natts * sizeof(bool));
+ nullflags = palloc_array(bool, natts);
memset(nullflags, true, natts * sizeof(bool));
tuplestore_putvalues(tupstore, expectedDesc, NULL, nullflags);
}
diff --git a/src/backend/executor/execTuples.c b/src/backend/executor/execTuples.c
index 7de490462d4..ff5caa2b35b 100644
--- a/src/backend/executor/execTuples.c
+++ b/src/backend/executor/execTuples.c
@@ -2281,7 +2281,7 @@ TupleDescGetAttInMetadata(TupleDesc tupdesc)
int32 *atttypmods;
AttInMetadata *attinmeta;
- attinmeta = (AttInMetadata *) palloc(sizeof(AttInMetadata));
+ attinmeta = palloc_object(AttInMetadata);
/* "Bless" the tupledesc so that we can make rowtype datums with it */
attinmeta->tupdesc = BlessTupleDesc(tupdesc);
@@ -2289,9 +2289,9 @@ TupleDescGetAttInMetadata(TupleDesc tupdesc)
/*
* Gather info needed later to call the "in" function for each attribute
*/
- attinfuncinfo = (FmgrInfo *) palloc0(natts * sizeof(FmgrInfo));
- attioparams = (Oid *) palloc0(natts * sizeof(Oid));
- atttypmods = (int32 *) palloc0(natts * sizeof(int32));
+ attinfuncinfo = palloc0_array(FmgrInfo, natts);
+ attioparams = palloc0_array(Oid, natts);
+ atttypmods = palloc0_array(int32, natts);
for (i = 0; i < natts; i++)
{
@@ -2328,8 +2328,8 @@ BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
int i;
HeapTuple tuple;
- dvalues = (Datum *) palloc(natts * sizeof(Datum));
- nulls = (bool *) palloc(natts * sizeof(bool));
+ dvalues = palloc_array(Datum, natts);
+ nulls = palloc_array(bool, natts);
/*
* Call the "in" function for each non-dropped attribute, even for nulls,
@@ -2445,7 +2445,7 @@ begin_tup_output_tupdesc(DestReceiver *dest,
{
TupOutputState *tstate;
- tstate = (TupOutputState *) palloc(sizeof(TupOutputState));
+ tstate = palloc_object(TupOutputState);
tstate->slot = MakeSingleTupleTableSlot(tupdesc, tts_ops);
tstate->dest = dest;
diff --git a/src/backend/executor/functions.c b/src/backend/executor/functions.c
index 757f8068e21..a77d4d14369 100644
--- a/src/backend/executor/functions.c
+++ b/src/backend/executor/functions.c
@@ -199,7 +199,7 @@ prepare_sql_fn_parse_info(HeapTuple procedureTuple,
Oid *argOidVect;
int argnum;
- argOidVect = (Oid *) palloc(nargs * sizeof(Oid));
+ argOidVect = palloc_array(Oid, nargs);
memcpy(argOidVect,
procedureStruct->proargtypes.values,
nargs * sizeof(Oid));
@@ -528,7 +528,7 @@ init_execution_state(List *queryTree_list,
CreateCommandName((Node *) stmt))));
/* OK, build the execution_state for this query */
- newes = (execution_state *) palloc(sizeof(execution_state));
+ newes = palloc_object(execution_state);
if (preves)
preves->next = newes;
else
@@ -2068,7 +2068,7 @@ coerce_fn_result_column(TargetEntry *src_tle,
DestReceiver *
CreateSQLFunctionDestReceiver(void)
{
- DR_sqlfunction *self = (DR_sqlfunction *) palloc0(sizeof(DR_sqlfunction));
+ DR_sqlfunction *self = palloc0_object(DR_sqlfunction);
self->pub.receiveSlot = sqlfunction_receive;
self->pub.rStartup = sqlfunction_startup;
diff --git a/src/backend/executor/instrument.c b/src/backend/executor/instrument.c
index 2d3569b3748..4183a934c1a 100644
--- a/src/backend/executor/instrument.c
+++ b/src/backend/executor/instrument.c
@@ -33,7 +33,7 @@ InstrAlloc(int n, int instrument_options, bool async_mode)
Instrumentation *instr;
/* initialize all fields to zeroes, then modify as needed */
- instr = palloc0(n * sizeof(Instrumentation));
+ instr = palloc0_array(Instrumentation, n);
if (instrument_options & (INSTRUMENT_BUFFERS | INSTRUMENT_TIMER | INSTRUMENT_WAL))
{
bool need_buffers = (instrument_options & INSTRUMENT_BUFFERS) != 0;
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 3005b5c0e3b..af7587f673e 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2991,7 +2991,7 @@ static HashAggBatch *
hashagg_batch_new(LogicalTape *input_tape, int setno,
int64 input_tuples, double input_card, int used_bits)
{
- HashAggBatch *batch = palloc0(sizeof(HashAggBatch));
+ HashAggBatch *batch = palloc0_object(HashAggBatch);
batch->setno = setno;
batch->used_bits = used_bits;
@@ -4252,7 +4252,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
Assert(numArguments > 0);
Assert(list_length(aggref->aggdistinct) == numDistinctCols);
- ops = palloc(numDistinctCols * sizeof(Oid));
+ ops = palloc_array(Oid, numDistinctCols);
i = 0;
foreach(lc, aggref->aggdistinct)
diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
index 0bd0e4e54d3..7b78fcf7475 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -177,8 +177,7 @@ ExecInitAppend(Append *node, EState *estate, int eflags)
appendstate->as_prune_state = NULL;
}
- appendplanstates = (PlanState **) palloc(nplans *
- sizeof(PlanState *));
+ appendplanstates = palloc_array(PlanState *, nplans);
/*
* call ExecInitNode on each of the valid plans to be executed and save
@@ -262,7 +261,7 @@ ExecInitAppend(Append *node, EState *estate, int eflags)
{
AsyncRequest *areq;
- areq = palloc(sizeof(AsyncRequest));
+ areq = palloc_object(AsyncRequest);
areq->requestor = (PlanState *) appendstate;
areq->requestee = appendplanstates[i];
areq->request_index = i;
diff --git a/src/backend/executor/nodeBitmapAnd.c b/src/backend/executor/nodeBitmapAnd.c
index 939907b6fcd..79be7d04ce1 100644
--- a/src/backend/executor/nodeBitmapAnd.c
+++ b/src/backend/executor/nodeBitmapAnd.c
@@ -69,7 +69,7 @@ ExecInitBitmapAnd(BitmapAnd *node, EState *estate, int eflags)
*/
nplans = list_length(node->bitmapplans);
- bitmapplanstates = (PlanState **) palloc0(nplans * sizeof(PlanState *));
+ bitmapplanstates = palloc0_array(PlanState *, nplans);
/*
* create new BitmapAndState for our BitmapAnd node
diff --git a/src/backend/executor/nodeBitmapOr.c b/src/backend/executor/nodeBitmapOr.c
index a9ede1c1087..c7b2a35fda9 100644
--- a/src/backend/executor/nodeBitmapOr.c
+++ b/src/backend/executor/nodeBitmapOr.c
@@ -70,7 +70,7 @@ ExecInitBitmapOr(BitmapOr *node, EState *estate, int eflags)
*/
nplans = list_length(node->bitmapplans);
- bitmapplanstates = (PlanState **) palloc0(nplans * sizeof(PlanState *));
+ bitmapplanstates = palloc0_array(PlanState *, nplans);
/*
* create new BitmapOrState for our BitmapOr node
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3b2275e8fe9..d4b41f71036 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -464,7 +464,7 @@ reorderqueue_push(IndexScanState *node, TupleTableSlot *slot,
ReorderTuple *rt;
int i;
- rt = (ReorderTuple *) palloc(sizeof(ReorderTuple));
+ rt = palloc_object(ReorderTuple);
rt->htup = ExecCopySlotHeapTuple(slot);
rt->orderbyvals =
(Datum *) palloc(sizeof(Datum) * scandesc->numberOfOrderBys);
@@ -1163,8 +1163,7 @@ ExecIndexBuildScanKeys(PlanState *planstate, Relation index,
n_runtime_keys = max_runtime_keys = *numRuntimeKeys;
/* Allocate array_keys as large as it could possibly need to be */
- array_keys = (IndexArrayKeyInfo *)
- palloc0(n_scan_keys * sizeof(IndexArrayKeyInfo));
+ array_keys = palloc0_array(IndexArrayKeyInfo, n_scan_keys);
n_array_keys = 0;
/*
@@ -1254,14 +1253,15 @@ ExecIndexBuildScanKeys(PlanState *planstate, Relation index,
if (max_runtime_keys == 0)
{
max_runtime_keys = 8;
- runtime_keys = (IndexRuntimeKeyInfo *)
- palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = palloc_array(IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
else
{
max_runtime_keys *= 2;
- runtime_keys = (IndexRuntimeKeyInfo *)
- repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = repalloc_array(runtime_keys,
+ IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
}
runtime_keys[n_runtime_keys].scan_key = this_scan_key;
@@ -1378,14 +1378,15 @@ ExecIndexBuildScanKeys(PlanState *planstate, Relation index,
if (max_runtime_keys == 0)
{
max_runtime_keys = 8;
- runtime_keys = (IndexRuntimeKeyInfo *)
- palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = palloc_array(IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
else
{
max_runtime_keys *= 2;
- runtime_keys = (IndexRuntimeKeyInfo *)
- repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = repalloc_array(runtime_keys,
+ IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
}
runtime_keys[n_runtime_keys].scan_key = this_sub_key;
@@ -1496,14 +1497,15 @@ ExecIndexBuildScanKeys(PlanState *planstate, Relation index,
if (max_runtime_keys == 0)
{
max_runtime_keys = 8;
- runtime_keys = (IndexRuntimeKeyInfo *)
- palloc(max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = palloc_array(IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
else
{
max_runtime_keys *= 2;
- runtime_keys = (IndexRuntimeKeyInfo *)
- repalloc(runtime_keys, max_runtime_keys * sizeof(IndexRuntimeKeyInfo));
+ runtime_keys = repalloc_array(runtime_keys,
+ IndexRuntimeKeyInfo,
+ max_runtime_keys);
}
}
runtime_keys[n_runtime_keys].scan_key = this_scan_key;
diff --git a/src/backend/executor/nodeMemoize.c b/src/backend/executor/nodeMemoize.c
index 609deb12afb..bf7f5c8d6a3 100644
--- a/src/backend/executor/nodeMemoize.c
+++ b/src/backend/executor/nodeMemoize.c
@@ -554,7 +554,7 @@ cache_lookup(MemoizeState *mstate, bool *found)
oldcontext = MemoryContextSwitchTo(mstate->tableContext);
/* Allocate a new key */
- entry->key = key = (MemoizeKey *) palloc(sizeof(MemoizeKey));
+ entry->key = key = palloc_object(MemoizeKey);
key->params = ExecCopySlotMinimalTuple(mstate->probeslot);
/* Update the total cache memory utilization */
@@ -633,7 +633,7 @@ cache_store_tuple(MemoizeState *mstate, TupleTableSlot *slot)
oldcontext = MemoryContextSwitchTo(mstate->tableContext);
- tuple = (MemoizeTuple *) palloc(sizeof(MemoizeTuple));
+ tuple = palloc_object(MemoizeTuple);
tuple->mintuple = ExecCopySlotMinimalTuple(slot);
tuple->next = NULL;
@@ -1005,7 +1005,7 @@ ExecInitMemoize(Memoize *node, EState *estate, int eflags)
* data */
mstate->hashfunctions = (FmgrInfo *) palloc(nkeys * sizeof(FmgrInfo));
- eqfuncoids = palloc(nkeys * sizeof(Oid));
+ eqfuncoids = palloc_array(Oid, nkeys);
for (i = 0; i < nkeys; i++)
{
diff --git a/src/backend/executor/nodeMergeAppend.c b/src/backend/executor/nodeMergeAppend.c
index e152c9ee3a0..3afa1433c94 100644
--- a/src/backend/executor/nodeMergeAppend.c
+++ b/src/backend/executor/nodeMergeAppend.c
@@ -121,7 +121,7 @@ ExecInitMergeAppend(MergeAppend *node, EState *estate, int eflags)
mergestate->ms_prune_state = NULL;
}
- mergeplanstates = (PlanState **) palloc(nplans * sizeof(PlanState *));
+ mergeplanstates = palloc_array(PlanState *, nplans);
mergestate->mergeplans = mergeplanstates;
mergestate->ms_nplans = nplans;
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index bc82e035ba2..5c964aa153d 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -437,7 +437,7 @@ ExecInitStoredGenerated(ResultRelInfo *resultRelInfo,
*/
oldContext = MemoryContextSwitchTo(estate->es_query_cxt);
- ri_GeneratedExprs = (ExprState **) palloc0(natts * sizeof(ExprState *));
+ ri_GeneratedExprs = palloc0_array(ExprState *, natts);
ri_NumGeneratedNeeded = 0;
for (int i = 0; i < natts; i++)
@@ -542,8 +542,8 @@ ExecComputeStoredGenerated(ResultRelInfo *resultRelInfo,
oldContext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
- values = palloc(sizeof(*values) * natts);
- nulls = palloc(sizeof(*nulls) * natts);
+ values = palloc_array(Datum, natts);
+ nulls = palloc_array(bool, natts);
slot_getallattrs(slot);
memcpy(nulls, slot->tts_isnull, sizeof(*nulls) * natts);
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 6b3db7548ed..c28bc6fc620 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -228,7 +228,7 @@ tablesample_init(SampleScanState *scanstate)
ListCell *arg;
scanstate->donetuples = 0;
- params = (Datum *) palloc(list_length(scanstate->args) * sizeof(Datum));
+ params = palloc_array(Datum, list_length(scanstate->args));
i = 0;
foreach(arg, scanstate->args)
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index 49767ed6a52..666f6d79059 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -960,10 +960,10 @@ ExecInitSubPlan(SubPlan *subplan, PlanState *parent)
sstate->tab_eq_funcoids = (Oid *) palloc(ncols * sizeof(Oid));
sstate->tab_collations = (Oid *) palloc(ncols * sizeof(Oid));
sstate->tab_hash_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo));
- lhs_hash_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo));
+ lhs_hash_funcs = palloc_array(FmgrInfo, ncols);
sstate->cur_eq_funcs = (FmgrInfo *) palloc(ncols * sizeof(FmgrInfo));
/* we'll need the cross-type equality fns below, but not in sstate */
- cross_eq_funcoids = (Oid *) palloc(ncols * sizeof(Oid));
+ cross_eq_funcoids = palloc_array(Oid, ncols);
i = 1;
foreach(l, oplist)
diff --git a/src/backend/executor/nodeTidrangescan.c b/src/backend/executor/nodeTidrangescan.c
index ab2eab9596e..5a3e3b50d00 100644
--- a/src/backend/executor/nodeTidrangescan.c
+++ b/src/backend/executor/nodeTidrangescan.c
@@ -72,7 +72,7 @@ MakeTidOpExpr(OpExpr *expr, TidRangeScanState *tidstate)
else
elog(ERROR, "could not identify CTID variable");
- tidopexpr = (TidOpExpr *) palloc(sizeof(TidOpExpr));
+ tidopexpr = palloc_object(TidOpExpr);
tidopexpr->inclusive = false; /* for now */
switch (expr->opno)
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index 5e56e29a15f..2216c20ca9f 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -78,7 +78,7 @@ TidExprListCreate(TidScanState *tidstate)
foreach(l, node->tidquals)
{
Expr *expr = (Expr *) lfirst(l);
- TidExpr *tidexpr = (TidExpr *) palloc0(sizeof(TidExpr));
+ TidExpr *tidexpr = palloc0_object(TidExpr);
if (is_opclause(expr))
{
@@ -157,8 +157,7 @@ TidListEval(TidScanState *tidstate)
* ScalarArrayOpExprs, we may have to enlarge the array.
*/
numAllocTids = list_length(tidstate->tss_tidexprs);
- tidList = (ItemPointerData *)
- palloc(numAllocTids * sizeof(ItemPointerData));
+ tidList = palloc_array(ItemPointerData, numAllocTids);
numTids = 0;
foreach(l, tidstate->tss_tidexprs)
@@ -189,9 +188,9 @@ TidListEval(TidScanState *tidstate)
if (numTids >= numAllocTids)
{
numAllocTids *= 2;
- tidList = (ItemPointerData *)
- repalloc(tidList,
- numAllocTids * sizeof(ItemPointerData));
+ tidList = repalloc_array(tidList,
+ ItemPointerData,
+ numAllocTids);
}
tidList[numTids++] = *itemptr;
}
@@ -214,9 +213,9 @@ TidListEval(TidScanState *tidstate)
if (numTids + ndatums > numAllocTids)
{
numAllocTids = numTids + ndatums;
- tidList = (ItemPointerData *)
- repalloc(tidList,
- numAllocTids * sizeof(ItemPointerData));
+ tidList = repalloc_array(tidList,
+ ItemPointerData,
+ numAllocTids);
}
for (i = 0; i < ndatums; i++)
{
@@ -245,9 +244,9 @@ TidListEval(TidScanState *tidstate)
if (numTids >= numAllocTids)
{
numAllocTids *= 2;
- tidList = (ItemPointerData *)
- repalloc(tidList,
- numAllocTids * sizeof(ItemPointerData));
+ tidList = repalloc_array(tidList,
+ ItemPointerData,
+ numAllocTids);
}
tidList[numTids++] = cursor_tid;
}
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index ecb2e4ccaa1..e9babf29576 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -119,9 +119,8 @@ SPI_connect_ext(int options)
if (_SPI_stack_depth == _SPI_connected + 1)
{
newdepth = _SPI_stack_depth * 2;
- _SPI_stack = (_SPI_connection *)
- repalloc(_SPI_stack,
- newdepth * sizeof(_SPI_connection));
+ _SPI_stack = repalloc_array(_SPI_stack,
+ _SPI_connection, newdepth);
_SPI_stack_depth = newdepth;
}
}
@@ -1130,8 +1129,8 @@ SPI_modifytuple(Relation rel, HeapTuple tuple, int natts, int *attnum,
SPI_result = 0;
numberOfAttributes = rel->rd_att->natts;
- v = (Datum *) palloc(numberOfAttributes * sizeof(Datum));
- n = (bool *) palloc(numberOfAttributes * sizeof(bool));
+ v = palloc_array(Datum, numberOfAttributes);
+ n = palloc_array(bool, numberOfAttributes);
/* fetch old values and nulls */
heap_deform_tuple(tuple, rel->rd_att, v, n);
@@ -2141,8 +2140,7 @@ spi_dest_startup(DestReceiver *self, int operation, TupleDesc typeinfo)
ALLOCSET_DEFAULT_SIZES);
MemoryContextSwitchTo(tuptabcxt);
- _SPI_current->tuptable = tuptable = (SPITupleTable *)
- palloc0(sizeof(SPITupleTable));
+ _SPI_current->tuptable = tuptable = palloc0_object(SPITupleTable);
tuptable->tuptabcxt = tuptabcxt;
tuptable->subid = GetCurrentSubTransactionId();
diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c
index 6c5e1f1262d..d9bf59f672d 100644
--- a/src/backend/executor/tqueue.c
+++ b/src/backend/executor/tqueue.c
@@ -120,7 +120,7 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle)
{
TQueueDestReceiver *self;
- self = (TQueueDestReceiver *) palloc0(sizeof(TQueueDestReceiver));
+ self = palloc0_object(TQueueDestReceiver);
self->pub.receiveSlot = tqueueReceiveSlot;
self->pub.rStartup = tqueueStartupReceiver;
@@ -138,7 +138,7 @@ CreateTupleQueueDestReceiver(shm_mq_handle *handle)
TupleQueueReader *
CreateTupleQueueReader(shm_mq_handle *handle)
{
- TupleQueueReader *reader = palloc0(sizeof(TupleQueueReader));
+ TupleQueueReader *reader = palloc0_object(TupleQueueReader);
reader->queue = handle;
diff --git a/src/backend/executor/tstoreReceiver.c b/src/backend/executor/tstoreReceiver.c
index 562de676457..fed58c5e2f5 100644
--- a/src/backend/executor/tstoreReceiver.c
+++ b/src/backend/executor/tstoreReceiver.c
@@ -237,7 +237,7 @@ tstoreDestroyReceiver(DestReceiver *self)
DestReceiver *
CreateTuplestoreDestReceiver(void)
{
- TStoreState *self = (TStoreState *) palloc0(sizeof(TStoreState));
+ TStoreState *self = palloc0_object(TStoreState);
self->pub.receiveSlot = tstoreReceiveSlot_notoast; /* might change */
self->pub.rStartup = tstoreStartupReceiver;
diff --git a/src/backend/foreign/foreign.c b/src/backend/foreign/foreign.c
index f0835fc3070..aaaa2e34633 100644
--- a/src/backend/foreign/foreign.c
+++ b/src/backend/foreign/foreign.c
@@ -65,7 +65,7 @@ GetForeignDataWrapperExtended(Oid fdwid, bits16 flags)
fdwform = (Form_pg_foreign_data_wrapper) GETSTRUCT(tp);
- fdw = (ForeignDataWrapper *) palloc(sizeof(ForeignDataWrapper));
+ fdw = palloc_object(ForeignDataWrapper);
fdw->fdwid = fdwid;
fdw->owner = fdwform->fdwowner;
fdw->fdwname = pstrdup(NameStr(fdwform->fdwname));
@@ -139,7 +139,7 @@ GetForeignServerExtended(Oid serverid, bits16 flags)
serverform = (Form_pg_foreign_server) GETSTRUCT(tp);
- server = (ForeignServer *) palloc(sizeof(ForeignServer));
+ server = palloc_object(ForeignServer);
server->serverid = serverid;
server->servername = pstrdup(NameStr(serverform->srvname));
server->owner = serverform->srvowner;
@@ -226,7 +226,7 @@ GetUserMapping(Oid userid, Oid serverid)
MappingUserName(userid), server->servername)));
}
- um = (UserMapping *) palloc(sizeof(UserMapping));
+ um = palloc_object(UserMapping);
um->umid = ((Form_pg_user_mapping) GETSTRUCT(tp))->oid;
um->userid = userid;
um->serverid = serverid;
@@ -264,7 +264,7 @@ GetForeignTable(Oid relid)
elog(ERROR, "cache lookup failed for foreign table %u", relid);
tableform = (Form_pg_foreign_table) GETSTRUCT(tp);
- ft = (ForeignTable *) palloc(sizeof(ForeignTable));
+ ft = palloc_object(ForeignTable);
ft->relid = relid;
ft->serverid = tableform->ftserver;
@@ -462,7 +462,7 @@ GetFdwRoutineForRelation(Relation relation, bool makecopy)
/* We have valid cached data --- does the caller want a copy? */
if (makecopy)
{
- fdwroutine = (FdwRoutine *) palloc(sizeof(FdwRoutine));
+ fdwroutine = palloc_object(FdwRoutine);
memcpy(fdwroutine, relation->rd_fdwroutine, sizeof(FdwRoutine));
return fdwroutine;
}
diff --git a/src/backend/jit/llvm/llvmjit.c b/src/backend/jit/llvm/llvmjit.c
index 614926720fb..39cc1c3e830 100644
--- a/src/backend/jit/llvm/llvmjit.c
+++ b/src/backend/jit/llvm/llvmjit.c
@@ -496,7 +496,7 @@ llvm_copy_attributes_at_index(LLVMValueRef v_from, LLVMValueRef v_to, uint32 ind
if (num_attributes == 0)
return;
- attrs = palloc(sizeof(LLVMAttributeRef) * num_attributes);
+ attrs = palloc_array(LLVMAttributeRef, num_attributes);
LLVMGetAttributesAtIndex(v_from, index, attrs);
for (int attno = 0; attno < num_attributes; attno++)
diff --git a/src/backend/jit/llvm/llvmjit_deform.c b/src/backend/jit/llvm/llvmjit_deform.c
index 5d169c7a40b..1749ca80e7f 100644
--- a/src/backend/jit/llvm/llvmjit_deform.c
+++ b/src/backend/jit/llvm/llvmjit_deform.c
@@ -156,12 +156,12 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,
b = LLVMCreateBuilderInContext(lc);
- attcheckattnoblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
- attstartblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
- attisnullblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
- attcheckalignblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
- attalignblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
- attstoreblocks = palloc(sizeof(LLVMBasicBlockRef) * natts);
+ attcheckattnoblocks = palloc_array(LLVMBasicBlockRef, natts);
+ attstartblocks = palloc_array(LLVMBasicBlockRef, natts);
+ attisnullblocks = palloc_array(LLVMBasicBlockRef, natts);
+ attcheckalignblocks = palloc_array(LLVMBasicBlockRef, natts);
+ attalignblocks = palloc_array(LLVMBasicBlockRef, natts);
+ attstoreblocks = palloc_array(LLVMBasicBlockRef, natts);
known_alignment = 0;
diff --git a/src/backend/jit/llvm/llvmjit_expr.c b/src/backend/jit/llvm/llvmjit_expr.c
index 3ef01aadd47..a117b2a7fee 100644
--- a/src/backend/jit/llvm/llvmjit_expr.c
+++ b/src/backend/jit/llvm/llvmjit_expr.c
@@ -297,7 +297,7 @@ llvm_compile_expr(ExprState *state)
"v.econtext.aggnulls");
/* allocate blocks for each op upfront, so we can do jumps easily */
- opblocks = palloc(sizeof(LLVMBasicBlockRef) * state->steps_len);
+ opblocks = palloc_array(LLVMBasicBlockRef, state->steps_len);
for (int opno = 0; opno < state->steps_len; opno++)
opblocks[opno] = l_bb_append_v(eval_fn, "b.op.%d.start", opno);
@@ -690,7 +690,8 @@ llvm_compile_expr(ExprState *state)
LLVMBuildStore(b, l_sbool_const(1), v_resnullp);
/* create blocks for checking args, one for each */
- b_checkargnulls = palloc(op->d.func.nargs * sizeof(LLVMBasicBlockRef));
+ b_checkargnulls = palloc_array(LLVMBasicBlockRef,
+ op->d.func.nargs);
for (int argno = 0; argno < op->d.func.nargs; argno++)
b_checkargnulls[argno] =
l_bb_before_v(b_nonull, "b.%d.isnull.%d", opno,
@@ -2519,7 +2520,8 @@ llvm_compile_expr(ExprState *state)
v_nullsp = l_ptr_const(nulls, l_ptr(TypeStorageBool));
/* create blocks for checking args */
- b_checknulls = palloc(nargs * sizeof(LLVMBasicBlockRef));
+ b_checknulls = palloc_array(LLVMBasicBlockRef,
+ nargs);
for (int argno = 0; argno < nargs; argno++)
{
b_checknulls[argno] =
@@ -2970,7 +2972,7 @@ llvm_compile_expr(ExprState *state)
*/
{
- CompiledExprState *cstate = palloc0(sizeof(CompiledExprState));
+ CompiledExprState *cstate = palloc0_object(CompiledExprState);
cstate->context = context;
cstate->funcname = funcname;
@@ -3082,7 +3084,7 @@ build_EvalXFuncInt(LLVMBuilderRef b, LLVMModuleRef mod, const char *funcname,
elog(ERROR, "parameter mismatch: %s expects %d passed %d",
funcname, LLVMCountParams(v_fn), nargs + 2);
- params = palloc(sizeof(LLVMValueRef) * (2 + nargs));
+ params = palloc_array(LLVMValueRef, (2 + nargs));
params[argno++] = v_state;
params[argno++] = l_ptr_const(op, l_ptr(StructExprEvalStep));
diff --git a/src/backend/lib/bipartite_match.c b/src/backend/lib/bipartite_match.c
index 5af789652c7..ed54f190494 100644
--- a/src/backend/lib/bipartite_match.c
+++ b/src/backend/lib/bipartite_match.c
@@ -38,7 +38,7 @@ static bool hk_depth_search(BipartiteMatchState *state, int u);
BipartiteMatchState *
BipartiteMatch(int u_size, int v_size, short **adjacency)
{
- BipartiteMatchState *state = palloc(sizeof(BipartiteMatchState));
+ BipartiteMatchState *state = palloc_object(BipartiteMatchState);
if (u_size < 0 || u_size >= SHRT_MAX ||
v_size < 0 || v_size >= SHRT_MAX)
diff --git a/src/backend/lib/dshash.c b/src/backend/lib/dshash.c
index b8d031f2015..43486ea6498 100644
--- a/src/backend/lib/dshash.c
+++ b/src/backend/lib/dshash.c
@@ -209,7 +209,7 @@ dshash_create(dsa_area *area, const dshash_parameters *params, void *arg)
dsa_pointer control;
/* Allocate the backend-local object representing the hash table. */
- hash_table = palloc(sizeof(dshash_table));
+ hash_table = palloc_object(dshash_table);
/* Allocate the control object in shared memory. */
control = dsa_allocate(area, sizeof(dshash_table_control));
@@ -274,7 +274,7 @@ dshash_attach(dsa_area *area, const dshash_parameters *params,
dsa_pointer control;
/* Allocate the backend-local object representing the hash table. */
- hash_table = palloc(sizeof(dshash_table));
+ hash_table = palloc_object(dshash_table);
/* Find the control object in shared memory. */
control = handle;
diff --git a/src/backend/lib/integerset.c b/src/backend/lib/integerset.c
index f4153b0e15a..aca1df2ad5a 100644
--- a/src/backend/lib/integerset.c
+++ b/src/backend/lib/integerset.c
@@ -284,7 +284,7 @@ intset_create(void)
{
IntegerSet *intset;
- intset = (IntegerSet *) palloc(sizeof(IntegerSet));
+ intset = palloc_object(IntegerSet);
intset->context = CurrentMemoryContext;
intset->mem_used = GetMemoryChunkSpace(intset);
diff --git a/src/backend/lib/knapsack.c b/src/backend/lib/knapsack.c
index 5b3697a090f..711d16c7d37 100644
--- a/src/backend/lib/knapsack.c
+++ b/src/backend/lib/knapsack.c
@@ -65,8 +65,8 @@ DiscreteKnapsack(int max_weight, int num_items,
Assert(max_weight >= 0);
Assert(num_items > 0 && item_weights);
- values = palloc((1 + max_weight) * sizeof(double));
- sets = palloc((1 + max_weight) * sizeof(Bitmapset *));
+ values = palloc_array(double, (1 + max_weight));
+ sets = palloc_array(Bitmapset *, (1 + max_weight));
for (i = 0; i <= max_weight; ++i)
{
diff --git a/src/backend/lib/pairingheap.c b/src/backend/lib/pairingheap.c
index 0aef8a88f1b..f85c17ddbc7 100644
--- a/src/backend/lib/pairingheap.c
+++ b/src/backend/lib/pairingheap.c
@@ -43,7 +43,7 @@ pairingheap_allocate(pairingheap_comparator compare, void *arg)
{
pairingheap *heap;
- heap = (pairingheap *) palloc(sizeof(pairingheap));
+ heap = palloc_object(pairingheap);
heap->ph_compare = compare;
heap->ph_arg = arg;
diff --git a/src/backend/lib/rbtree.c b/src/backend/lib/rbtree.c
index 3b5e5faa9bf..13388432d34 100644
--- a/src/backend/lib/rbtree.c
+++ b/src/backend/lib/rbtree.c
@@ -106,7 +106,7 @@ rbt_create(Size node_size,
rbt_freefunc freefunc,
void *arg)
{
- RBTree *tree = (RBTree *) palloc(sizeof(RBTree));
+ RBTree *tree = palloc_object(RBTree);
Assert(node_size > sizeof(RBTNode));
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 26dd241efa9..f369d5266b7 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -242,7 +242,7 @@ scram_init(Port *port, const char *selected_mech, const char *shadow_pass)
scram_state *state;
bool got_secret;
- state = (scram_state *) palloc0(sizeof(scram_state));
+ state = palloc0_object(scram_state);
state->port = port;
state->state = SCRAM_AUTH_INIT;
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..b619d42dd88 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -311,7 +311,7 @@ regcomp_auth_token(AuthToken *token, char *filename, int line_num,
return 0; /* nothing to compile */
token->regex = (regex_t *) palloc0(sizeof(regex_t));
- wstr = palloc((strlen(token->string + 1) + 1) * sizeof(pg_wchar));
+ wstr = palloc_array(pg_wchar, (strlen(token->string + 1) + 1));
wlen = pg_mb2wchar_with_len(token->string + 1,
wstr, strlen(token->string + 1));
@@ -352,7 +352,7 @@ regexec_auth_token(const char *match, AuthToken *token, size_t nmatch,
Assert(token->string[0] == '/' && token->regex);
- wmatchstr = palloc((strlen(match) + 1) * sizeof(pg_wchar));
+ wmatchstr = palloc_array(pg_wchar, (strlen(match) + 1));
wmatchlen = pg_mb2wchar_with_len(match, wmatchstr, strlen(match));
r = pg_regexec(token->regex, wmatchstr, wmatchlen, 0, NULL, nmatch, pmatch, 0);
@@ -892,7 +892,7 @@ process_line:
* to this list.
*/
oldcxt = MemoryContextSwitchTo(tokenize_context);
- tok_line = (TokenizedAuthLine *) palloc0(sizeof(TokenizedAuthLine));
+ tok_line = palloc0_object(TokenizedAuthLine);
tok_line->fields = current_line;
tok_line->file_name = pstrdup(filename);
tok_line->line_num = line_number;
@@ -1340,7 +1340,7 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
AuthToken *token;
HbaLine *parsedline;
- parsedline = palloc0(sizeof(HbaLine));
+ parsedline = palloc0_object(HbaLine);
parsedline->sourcefile = pstrdup(file_name);
parsedline->linenumber = line_num;
parsedline->rawline = pstrdup(tok_line->raw_line);
@@ -2567,7 +2567,7 @@ check_hba(hbaPort *port)
}
/* If no matching entry was found, then implicitly reject. */
- hba = palloc0(sizeof(HbaLine));
+ hba = palloc0_object(HbaLine);
hba->auth_method = uaImplicitReject;
port->hba = hba;
}
@@ -2703,7 +2703,7 @@ parse_ident_line(TokenizedAuthLine *tok_line, int elevel)
Assert(tok_line->fields != NIL);
field = list_head(tok_line->fields);
- parsedline = palloc0(sizeof(IdentLine));
+ parsedline = palloc0_object(IdentLine);
parsedline->linenumber = line_num;
/* Get the map token (must exist) */
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 1bf27d93cfa..23cc1e1986c 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -178,7 +178,7 @@ pq_init(ClientSocket *client_sock)
int latch_pos PG_USED_FOR_ASSERTS_ONLY;
/* allocate the Port struct and copy the ClientSocket contents to it */
- port = palloc0(sizeof(Port));
+ port = palloc0_object(Port);
port->sock = client_sock->sock;
memcpy(&port->raddr.addr, &client_sock->raddr.addr, client_sock->raddr.salen);
port->raddr.salen = client_sock->raddr.salen;
diff --git a/src/backend/nodes/queryjumblefuncs.c b/src/backend/nodes/queryjumblefuncs.c
index b103a281936..63723a9ca2d 100644
--- a/src/backend/nodes/queryjumblefuncs.c
+++ b/src/backend/nodes/queryjumblefuncs.c
@@ -115,7 +115,7 @@ JumbleQuery(Query *query)
Assert(IsQueryIdEnabled());
- jstate = (JumbleState *) palloc(sizeof(JumbleState));
+ jstate = palloc_object(JumbleState);
/* Set up workspace for query jumbling */
jstate->jumble = (unsigned char *) palloc(JUMBLE_SIZE);
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 64d3a09f765..34942e29686 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -663,7 +663,7 @@ fnname(int numCols) \
return NULL; /* it was "<>", so return NULL pointer */ \
if (length != 1 || token[0] != '(') \
elog(ERROR, "unrecognized token: \"%.*s\"", length, token); \
- vals = (datatype *) palloc(numCols * sizeof(datatype)); \
+ vals = palloc_array(datatype, numCols); \
for (int i = 0; i < numCols; i++) \
{ \
token = pg_strtok(&length); \
diff --git a/src/backend/optimizer/geqo/geqo_erx.c b/src/backend/optimizer/geqo/geqo_erx.c
index af289f7eeb7..df1a13b1a23 100644
--- a/src/backend/optimizer/geqo/geqo_erx.c
+++ b/src/backend/optimizer/geqo/geqo_erx.c
@@ -62,7 +62,7 @@ alloc_edge_table(PlannerInfo *root, int num_gene)
* directly; 0 will not be used
*/
- edge_table = (Edge *) palloc((num_gene + 1) * sizeof(Edge));
+ edge_table = palloc_array(Edge, (num_gene + 1));
return edge_table;
}
diff --git a/src/backend/optimizer/geqo/geqo_eval.c b/src/backend/optimizer/geqo/geqo_eval.c
index f07d1dc8ac6..7e60b5dce06 100644
--- a/src/backend/optimizer/geqo/geqo_eval.c
+++ b/src/backend/optimizer/geqo/geqo_eval.c
@@ -191,7 +191,7 @@ gimme_tree(PlannerInfo *root, Gene *tour, int num_gene)
cur_rel_index - 1);
/* Make it into a single-rel clump */
- cur_clump = (Clump *) palloc(sizeof(Clump));
+ cur_clump = palloc_object(Clump);
cur_clump->joinrel = cur_rel;
cur_clump->size = 1;
diff --git a/src/backend/optimizer/geqo/geqo_pmx.c b/src/backend/optimizer/geqo/geqo_pmx.c
index 01d55711925..049145451a3 100644
--- a/src/backend/optimizer/geqo/geqo_pmx.c
+++ b/src/backend/optimizer/geqo/geqo_pmx.c
@@ -48,10 +48,10 @@
void
pmx(PlannerInfo *root, Gene *tour1, Gene *tour2, Gene *offspring, int num_gene)
{
- int *failed = (int *) palloc((num_gene + 1) * sizeof(int));
- int *from = (int *) palloc((num_gene + 1) * sizeof(int));
- int *indx = (int *) palloc((num_gene + 1) * sizeof(int));
- int *check_list = (int *) palloc((num_gene + 1) * sizeof(int));
+ int *failed = palloc_array(int, (num_gene + 1));
+ int *from = palloc_array(int, (num_gene + 1));
+ int *indx = palloc_array(int, (num_gene + 1));
+ int *check_list = palloc_array(int, (num_gene + 1));
int left,
right,
diff --git a/src/backend/optimizer/geqo/geqo_pool.c b/src/backend/optimizer/geqo/geqo_pool.c
index b6de0d93f28..a94d064edd1 100644
--- a/src/backend/optimizer/geqo/geqo_pool.c
+++ b/src/backend/optimizer/geqo/geqo_pool.c
@@ -46,7 +46,7 @@ alloc_pool(PlannerInfo *root, int pool_size, int string_length)
int i;
/* pool */
- new_pool = (Pool *) palloc(sizeof(Pool));
+ new_pool = palloc_object(Pool);
new_pool->size = (int) pool_size;
new_pool->string_length = (int) string_length;
@@ -163,7 +163,7 @@ alloc_chromo(PlannerInfo *root, int string_length)
{
Chromosome *chromo;
- chromo = (Chromosome *) palloc(sizeof(Chromosome));
+ chromo = palloc_object(Chromosome);
chromo->string = (Gene *) palloc((string_length + 1) * sizeof(Gene));
return chromo;
diff --git a/src/backend/optimizer/geqo/geqo_recombination.c b/src/backend/optimizer/geqo/geqo_recombination.c
index a5d3e47ad11..73e76ef05d4 100644
--- a/src/backend/optimizer/geqo/geqo_recombination.c
+++ b/src/backend/optimizer/geqo/geqo_recombination.c
@@ -74,7 +74,7 @@ alloc_city_table(PlannerInfo *root, int num_gene)
* palloc one extra location so that nodes numbered 1..n can be indexed
* directly; 0 will not be used
*/
- city_table = (City *) palloc((num_gene + 1) * sizeof(City));
+ city_table = palloc_array(City, (num_gene + 1));
return city_table;
}
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 1115ebeee29..f4e7e1eeacd 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -998,7 +998,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,
parent_rows = 0;
parent_size = 0;
nattrs = rel->max_attr - rel->min_attr + 1;
- parent_attrsizes = (double *) palloc0(nattrs * sizeof(double));
+ parent_attrsizes = palloc0_array(double, nattrs);
foreach(l, root->append_rel_list)
{
diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index 5d51f97f219..5853fa8f48f 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -495,7 +495,7 @@ addRangeClause(RangeQueryClause **rqlist, Node *clause,
}
/* No matching var found, so make a new clause-pair data structure */
- rqelem = (RangeQueryClause *) palloc(sizeof(RangeQueryClause));
+ rqelem = palloc_object(RangeQueryClause);
rqelem->var = var;
if (is_lobound)
{
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index ec004ed9493..740accf9be3 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2189,7 +2189,7 @@ append_nonpartial_cost(List *subpaths, int numpaths, int parallel_workers)
* whichever is less.
*/
arrlen = Min(parallel_workers, numpaths);
- costarr = (Cost *) palloc(sizeof(Cost) * arrlen);
+ costarr = palloc_array(Cost, arrlen);
/* The first few paths will each be claimed by a different worker. */
path_index = 0;
@@ -4113,7 +4113,7 @@ cached_scansel(PlannerInfo *root, RestrictInfo *rinfo, PathKey *pathkey)
/* Cache the result in suitably long-lived workspace */
oldcontext = MemoryContextSwitchTo(root->planner_cxt);
- cache = (MergeScanSelCache *) palloc(sizeof(MergeScanSelCache));
+ cache = palloc_object(MergeScanSelCache);
cache->opfamily = pathkey->pk_opfamily;
cache->collation = pathkey->pk_eclass->ec_collation;
cache->strategy = pathkey->pk_strategy;
diff --git a/src/backend/optimizer/path/equivclass.c b/src/backend/optimizer/path/equivclass.c
index 7cafaca33c5..c95e1fa087c 100644
--- a/src/backend/optimizer/path/equivclass.c
+++ b/src/backend/optimizer/path/equivclass.c
@@ -1222,8 +1222,8 @@ generate_base_implied_equalities_no_const(PlannerInfo *root,
* ordering would succeed. XXX FIXME: use a UNION-FIND algorithm similar
* to the way we build merged ECs. (Use a list-of-lists for each rel.)
*/
- prev_ems = (EquivalenceMember **)
- palloc0(root->simple_rel_array_size * sizeof(EquivalenceMember *));
+ prev_ems = palloc0_array(EquivalenceMember *,
+ root->simple_rel_array_size);
foreach(lc, ec->ec_members)
{
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index fa3edf60f3c..d3459c837cf 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -1266,7 +1266,7 @@ group_similar_or_args(PlannerInfo *root, RelOptInfo *rel, RestrictInfo *rinfo)
* which will be used to sort these arguments at the next step.
*/
i = -1;
- matches = (OrArgIndexMatch *) palloc(sizeof(OrArgIndexMatch) * n);
+ matches = palloc_array(OrArgIndexMatch, n);
foreach(lc, orargs)
{
Node *arg = lfirst(lc);
@@ -1791,8 +1791,7 @@ choose_bitmap_and(PlannerInfo *root, RelOptInfo *rel, List *paths)
* same set of clauses; keep only the cheapest-to-scan of any such groups.
* The surviving paths are put into an array for qsort'ing.
*/
- pathinfoarray = (PathClauseUsage **)
- palloc(npaths * sizeof(PathClauseUsage *));
+ pathinfoarray = palloc_array(PathClauseUsage *, npaths);
clauselist = NIL;
npaths = 0;
foreach(l, paths)
@@ -2028,7 +2027,7 @@ classify_index_clause_usage(Path *path, List **clauselist)
Bitmapset *clauseids;
ListCell *lc;
- result = (PathClauseUsage *) palloc(sizeof(PathClauseUsage));
+ result = palloc_object(PathClauseUsage);
result->path = path;
/* Recursively find the quals and preds used by the path */
@@ -3434,7 +3433,7 @@ match_orclause_to_indexcol(PlannerInfo *root,
get_typlenbyvalalign(consttype, &typlen, &typbyval, &typalign);
- elems = (Datum *) palloc(sizeof(Datum) * list_length(consts));
+ elems = palloc_array(Datum, list_length(consts));
foreach_node(Const, value, consts)
{
Assert(!value->constisnull);
diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c
index 154eb505d75..df951ed30a1 100644
--- a/src/backend/optimizer/path/pathkeys.c
+++ b/src/backend/optimizer/path/pathkeys.c
@@ -1675,8 +1675,8 @@ select_outer_pathkeys_for_merge(PlannerInfo *root,
* Make arrays of the ECs used by the mergeclauses (dropping any
* duplicates) and their "popularity" scores.
*/
- ecs = (EquivalenceClass **) palloc(nClauses * sizeof(EquivalenceClass *));
- scores = (int *) palloc(nClauses * sizeof(int));
+ ecs = palloc_array(EquivalenceClass *, nClauses);
+ scores = palloc_array(int, nClauses);
necs = 0;
foreach(lc, mergeclauses)
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 1106cd85f0c..fcebd5c46f2 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -1687,8 +1687,8 @@ create_memoize_plan(PlannerInfo *root, MemoizePath *best_path, int flags)
nkeys = list_length(param_exprs);
Assert(nkeys > 0);
- operators = palloc(nkeys * sizeof(Oid));
- collations = palloc(nkeys * sizeof(Oid));
+ operators = palloc_array(Oid, nkeys);
+ collations = palloc_array(Oid, nkeys);
i = 0;
forboth(lc, param_exprs, lc2, best_path->hash_operators)
@@ -1797,8 +1797,8 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path, int flags)
*/
newtlist = subplan->targetlist;
numGroupCols = list_length(uniq_exprs);
- groupColIdx = (AttrNumber *) palloc(numGroupCols * sizeof(AttrNumber));
- groupCollations = (Oid *) palloc(numGroupCols * sizeof(Oid));
+ groupColIdx = palloc_array(AttrNumber, numGroupCols);
+ groupCollations = palloc_array(Oid, numGroupCols);
groupColPos = 0;
foreach(l, uniq_exprs)
@@ -1824,7 +1824,7 @@ create_unique_plan(PlannerInfo *root, UniquePath *best_path, int flags)
* those are cross-type operators then the equality operators are the
* ones for the IN clause operators' RHS datatype.
*/
- groupOperators = (Oid *) palloc(numGroupCols * sizeof(Oid));
+ groupOperators = palloc_array(Oid, numGroupCols);
groupColPos = 0;
foreach(l, in_operators)
{
@@ -2359,7 +2359,7 @@ remap_groupColIdx(PlannerInfo *root, List *groupClause)
Assert(grouping_map);
- new_grpColIdx = palloc0(sizeof(AttrNumber) * list_length(groupClause));
+ new_grpColIdx = palloc0_array(AttrNumber, list_length(groupClause));
i = 0;
foreach(lc, groupClause)
@@ -2422,7 +2422,7 @@ create_groupingsets_plan(PlannerInfo *root, GroupingSetsPath *best_path)
maxref = gc->tleSortGroupRef;
}
- grouping_map = (AttrNumber *) palloc0((maxref + 1) * sizeof(AttrNumber));
+ grouping_map = palloc0_array(AttrNumber, (maxref + 1));
/* Now look up the column numbers in the child's tlist */
foreach(lc, root->processed_groupClause)
@@ -2646,9 +2646,9 @@ create_windowagg_plan(PlannerInfo *root, WindowAggPath *best_path)
* Convert SortGroupClause lists into arrays of attr indexes and equality
* operators, as wanted by executor.
*/
- partColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numPart);
- partOperators = (Oid *) palloc(sizeof(Oid) * numPart);
- partCollations = (Oid *) palloc(sizeof(Oid) * numPart);
+ partColIdx = palloc_array(AttrNumber, numPart);
+ partOperators = palloc_array(Oid, numPart);
+ partCollations = palloc_array(Oid, numPart);
partNumCols = 0;
foreach(lc, wc->partitionClause)
@@ -2663,9 +2663,9 @@ create_windowagg_plan(PlannerInfo *root, WindowAggPath *best_path)
partNumCols++;
}
- ordColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numOrder);
- ordOperators = (Oid *) palloc(sizeof(Oid) * numOrder);
- ordCollations = (Oid *) palloc(sizeof(Oid) * numOrder);
+ ordColIdx = palloc_array(AttrNumber, numOrder);
+ ordOperators = palloc_array(Oid, numOrder);
+ ordCollations = palloc_array(Oid, numOrder);
ordNumCols = 0;
foreach(lc, wc->orderClause)
@@ -2875,9 +2875,9 @@ create_limit_plan(PlannerInfo *root, LimitPath *best_path, int flags)
ListCell *l;
numUniqkeys = list_length(parse->sortClause);
- uniqColIdx = (AttrNumber *) palloc(numUniqkeys * sizeof(AttrNumber));
- uniqOperators = (Oid *) palloc(numUniqkeys * sizeof(Oid));
- uniqCollations = (Oid *) palloc(numUniqkeys * sizeof(Oid));
+ uniqColIdx = palloc_array(AttrNumber, numUniqkeys);
+ uniqOperators = palloc_array(Oid, numUniqkeys);
+ uniqCollations = palloc_array(Oid, numUniqkeys);
numUniqkeys = 0;
foreach(l, parse->sortClause)
@@ -4627,10 +4627,10 @@ create_mergejoin_plan(PlannerInfo *root,
*/
nClauses = list_length(mergeclauses);
Assert(nClauses == list_length(best_path->path_mergeclauses));
- mergefamilies = (Oid *) palloc(nClauses * sizeof(Oid));
- mergecollations = (Oid *) palloc(nClauses * sizeof(Oid));
- mergereversals = (bool *) palloc(nClauses * sizeof(bool));
- mergenullsfirst = (bool *) palloc(nClauses * sizeof(bool));
+ mergefamilies = palloc_array(Oid, nClauses);
+ mergecollations = palloc_array(Oid, nClauses);
+ mergereversals = palloc_array(bool, nClauses);
+ mergenullsfirst = palloc_array(bool, nClauses);
opathkey = NULL;
opeclass = NULL;
@@ -5380,7 +5380,7 @@ order_qual_clauses(PlannerInfo *root, List *clauses)
* Collect the items and costs into an array. This is to avoid repeated
* cost_qual_eval work if the inputs aren't RestrictInfos.
*/
- items = (QualItem *) palloc(nitems * sizeof(QualItem));
+ items = palloc_array(QualItem, nitems);
i = 0;
foreach(lc, clauses)
{
@@ -5965,9 +5965,9 @@ make_recursive_union(List *tlist,
Oid *dupCollations;
ListCell *slitem;
- dupColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols);
- dupOperators = (Oid *) palloc(sizeof(Oid) * numCols);
- dupCollations = (Oid *) palloc(sizeof(Oid) * numCols);
+ dupColIdx = palloc_array(AttrNumber, numCols);
+ dupOperators = palloc_array(Oid, numCols);
+ dupCollations = palloc_array(Oid, numCols);
foreach(slitem, distinctList)
{
@@ -6260,10 +6260,10 @@ prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys,
* We will need at most list_length(pathkeys) sort columns; possibly less
*/
numsortkeys = list_length(pathkeys);
- sortColIdx = (AttrNumber *) palloc(numsortkeys * sizeof(AttrNumber));
- sortOperators = (Oid *) palloc(numsortkeys * sizeof(Oid));
- collations = (Oid *) palloc(numsortkeys * sizeof(Oid));
- nullsFirst = (bool *) palloc(numsortkeys * sizeof(bool));
+ sortColIdx = palloc_array(AttrNumber, numsortkeys);
+ sortOperators = palloc_array(Oid, numsortkeys);
+ collations = palloc_array(Oid, numsortkeys);
+ nullsFirst = palloc_array(bool, numsortkeys);
numsortkeys = 0;
@@ -6501,10 +6501,10 @@ make_sort_from_sortclauses(List *sortcls, Plan *lefttree)
/* Convert list-ish representation to arrays wanted by executor */
numsortkeys = list_length(sortcls);
- sortColIdx = (AttrNumber *) palloc(numsortkeys * sizeof(AttrNumber));
- sortOperators = (Oid *) palloc(numsortkeys * sizeof(Oid));
- collations = (Oid *) palloc(numsortkeys * sizeof(Oid));
- nullsFirst = (bool *) palloc(numsortkeys * sizeof(bool));
+ sortColIdx = palloc_array(AttrNumber, numsortkeys);
+ sortOperators = palloc_array(Oid, numsortkeys);
+ collations = palloc_array(Oid, numsortkeys);
+ nullsFirst = palloc_array(bool, numsortkeys);
numsortkeys = 0;
foreach(l, sortcls)
@@ -6552,10 +6552,10 @@ make_sort_from_groupcols(List *groupcls,
/* Convert list-ish representation to arrays wanted by executor */
numsortkeys = list_length(groupcls);
- sortColIdx = (AttrNumber *) palloc(numsortkeys * sizeof(AttrNumber));
- sortOperators = (Oid *) palloc(numsortkeys * sizeof(Oid));
- collations = (Oid *) palloc(numsortkeys * sizeof(Oid));
- nullsFirst = (bool *) palloc(numsortkeys * sizeof(bool));
+ sortColIdx = palloc_array(AttrNumber, numsortkeys);
+ sortOperators = palloc_array(Oid, numsortkeys);
+ collations = palloc_array(Oid, numsortkeys);
+ nullsFirst = palloc_array(bool, numsortkeys);
numsortkeys = 0;
foreach(l, groupcls)
@@ -6796,9 +6796,9 @@ make_unique_from_sortclauses(Plan *lefttree, List *distinctList)
* operators, as wanted by executor
*/
Assert(numCols > 0);
- uniqColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols);
- uniqOperators = (Oid *) palloc(sizeof(Oid) * numCols);
- uniqCollations = (Oid *) palloc(sizeof(Oid) * numCols);
+ uniqColIdx = palloc_array(AttrNumber, numCols);
+ uniqOperators = palloc_array(Oid, numCols);
+ uniqCollations = palloc_array(Oid, numCols);
foreach(slitem, distinctList)
{
@@ -6845,9 +6845,9 @@ make_unique_from_pathkeys(Plan *lefttree, List *pathkeys, int numCols)
* prepare_sort_from_pathkeys ... maybe unify sometime?
*/
Assert(numCols >= 0 && numCols <= list_length(pathkeys));
- uniqColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols);
- uniqOperators = (Oid *) palloc(sizeof(Oid) * numCols);
- uniqCollations = (Oid *) palloc(sizeof(Oid) * numCols);
+ uniqColIdx = palloc_array(AttrNumber, numCols);
+ uniqOperators = palloc_array(Oid, numCols);
+ uniqCollations = palloc_array(Oid, numCols);
foreach(lc, pathkeys)
{
@@ -6982,10 +6982,10 @@ make_setop(SetOpCmd cmd, SetOpStrategy strategy,
* convert SortGroupClause list into arrays of attr indexes and comparison
* operators, as wanted by executor
*/
- cmpColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols);
- cmpOperators = (Oid *) palloc(sizeof(Oid) * numCols);
- cmpCollations = (Oid *) palloc(sizeof(Oid) * numCols);
- cmpNullsFirst = (bool *) palloc(sizeof(bool) * numCols);
+ cmpColIdx = palloc_array(AttrNumber, numCols);
+ cmpOperators = palloc_array(Oid, numCols);
+ cmpCollations = palloc_array(Oid, numCols);
+ cmpNullsFirst = palloc_array(bool, numCols);
foreach(slitem, groupList)
{
diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c
index 2cb0ae6d659..6a70eae50c6 100644
--- a/src/backend/optimizer/plan/initsplan.c
+++ b/src/backend/optimizer/plan/initsplan.c
@@ -431,8 +431,8 @@ remove_useless_groupby_columns(PlannerInfo *root)
* Fill groupbyattnos[k] with a bitmapset of the column attnos of RTE k
* that are GROUP BY items.
*/
- groupbyattnos = (Bitmapset **) palloc0(sizeof(Bitmapset *) *
- (list_length(parse->rtable) + 1));
+ groupbyattnos = palloc0_array(Bitmapset *,
+ (list_length(parse->rtable) + 1));
foreach(lc, root->processed_groupClause)
{
SortGroupClause *sgc = lfirst_node(SortGroupClause, lc);
@@ -590,8 +590,8 @@ remove_useless_groupby_columns(PlannerInfo *root)
* allocate the surplusvars[] array until we find something.
*/
if (surplusvars == NULL)
- surplusvars = (Bitmapset **) palloc0(sizeof(Bitmapset *) *
- (list_length(parse->rtable) + 1));
+ surplusvars = palloc0_array(Bitmapset *,
+ (list_length(parse->rtable) + 1));
/* Remember the attnos of the removable columns */
surplusvars[relid] = bms_difference(relattnos, best_keycolumns);
diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c
index 64605be3178..461f2b4ce92 100644
--- a/src/backend/optimizer/plan/planagg.c
+++ b/src/backend/optimizer/plan/planagg.c
@@ -335,7 +335,7 @@ build_minmax_path(PlannerInfo *root, MinMaxAggInfo *mminfo,
* than before. (This means that when we are done, there will be no Vars
* of level 1, which is why the subquery can become an initplan.)
*/
- subroot = (PlannerInfo *) palloc(sizeof(PlannerInfo));
+ subroot = palloc_object(PlannerInfo);
memcpy(subroot, root, sizeof(PlannerInfo));
subroot->query_level++;
subroot->parent_root = root;
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 6803edd0854..b133fde082b 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -2107,7 +2107,7 @@ preprocess_grouping_sets(PlannerInfo *root)
List *sets;
int maxref = 0;
ListCell *lc_set;
- grouping_sets_data *gd = palloc0(sizeof(grouping_sets_data));
+ grouping_sets_data *gd = palloc0_object(grouping_sets_data);
parse->groupingSets = expand_grouping_sets(parse->groupingSets, parse->groupDistinct, -1);
@@ -2897,10 +2897,10 @@ extract_rollup_sets(List *groupingSets)
* to leave 0 free for the NIL node in the graph algorithm.
*----------
*/
- orig_sets = palloc0((num_sets_raw + 1) * sizeof(List *));
- set_masks = palloc0((num_sets_raw + 1) * sizeof(Bitmapset *));
- adjacency = palloc0((num_sets_raw + 1) * sizeof(short *));
- adjacency_buf = palloc((num_sets_raw + 1) * sizeof(short));
+ orig_sets = palloc0_array(List *, (num_sets_raw + 1));
+ set_masks = palloc0_array(Bitmapset *, (num_sets_raw + 1));
+ adjacency = palloc0_array(short *, (num_sets_raw + 1));
+ adjacency_buf = palloc_array(short, (num_sets_raw + 1));
j_size = 0;
j = 0;
@@ -2985,7 +2985,7 @@ extract_rollup_sets(List *groupingSets)
* pair_vu[v] = u (both will be true, but we check both so that we can do
* it in one pass)
*/
- chains = palloc0((num_sets + 1) * sizeof(int));
+ chains = palloc0_array(int, (num_sets + 1));
for (i = 1; i <= num_sets; ++i)
{
@@ -3001,7 +3001,7 @@ extract_rollup_sets(List *groupingSets)
}
/* build result lists. */
- results = palloc0((num_chains + 1) * sizeof(List *));
+ results = palloc0_array(List *, (num_chains + 1));
for (i = 1; i <= num_sets; ++i)
{
@@ -4258,7 +4258,8 @@ consider_groupingsets_paths(PlannerInfo *root,
double scale;
int num_rollups = list_length(gd->rollups);
int k_capacity;
- int *k_weights = palloc(num_rollups * sizeof(int));
+ int *k_weights = palloc_array(int,
+ num_rollups);
Bitmapset *hash_items = NULL;
int i;
@@ -5838,8 +5839,8 @@ select_active_windows(PlannerInfo *root, WindowFuncLists *wflists)
List *result = NIL;
ListCell *lc;
int nActive = 0;
- WindowClauseSortData *actives = palloc(sizeof(WindowClauseSortData)
- * list_length(windowClause));
+ WindowClauseSortData *actives = palloc_array(WindowClauseSortData,
+ list_length(windowClause));
/* First, construct an array of the active windows */
foreach(lc, windowClause)
@@ -6263,8 +6264,8 @@ make_sort_input_target(PlannerInfo *root,
/* Inspect tlist and collect per-column information */
ncols = list_length(final_target->exprs);
- col_is_srf = (bool *) palloc0(ncols * sizeof(bool));
- postpone_col = (bool *) palloc0(ncols * sizeof(bool));
+ col_is_srf = palloc0_array(bool, ncols);
+ postpone_col = palloc0_array(bool, ncols);
have_srf = have_volatile = have_expensive = have_srf_sortcols = false;
i = 0;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 1e7b7bc6ffc..c0e699cc82a 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -308,7 +308,7 @@ set_plan_references(PlannerInfo *root, Plan *plan)
PlanRowMark *newrc;
/* flat copy is enough since all fields are scalars */
- newrc = (PlanRowMark *) palloc(sizeof(PlanRowMark));
+ newrc = palloc_object(PlanRowMark);
memcpy(newrc, rc, sizeof(PlanRowMark));
/* adjust indexes ... but *not* the rowmarkId */
@@ -541,7 +541,7 @@ add_rte_to_flat_rtable(PlannerGlobal *glob, List *rteperminfos,
RangeTblEntry *newrte;
/* flat copy to duplicate all the scalar fields */
- newrte = (RangeTblEntry *) palloc(sizeof(RangeTblEntry));
+ newrte = palloc_object(RangeTblEntry);
memcpy(newrte, rte, sizeof(RangeTblEntry));
/* zap unneeded sub-structure */
@@ -1956,7 +1956,7 @@ offset_relid_set(Relids relids, int rtoffset)
static inline Var *
copyVar(Var *var)
{
- Var *newvar = (Var *) palloc(sizeof(Var));
+ Var *newvar = palloc_object(Var);
*newvar = *var;
return newvar;
diff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c
index 5d9225e9909..52d69e5881a 100644
--- a/src/backend/optimizer/prep/prepjointree.c
+++ b/src/backend/optimizer/prep/prepjointree.c
@@ -547,7 +547,7 @@ pull_up_sublinks_jointree_recurse(PlannerInfo *root, Node *jtnode,
* Make a modifiable copy of join node, but don't bother copying its
* subnodes (yet).
*/
- j = (JoinExpr *) palloc(sizeof(JoinExpr));
+ j = palloc_object(JoinExpr);
memcpy(j, jtnode, sizeof(JoinExpr));
jtlink = (Node *) j;
@@ -1623,8 +1623,8 @@ make_setop_translation_list(Query *query, int newvarno,
/* Initialize reverse-translation array with all entries zero */
/* (entries for resjunk columns will stay that way) */
appinfo->num_child_cols = list_length(query->targetList);
- appinfo->parent_colnos = pcolnos =
- (AttrNumber *) palloc0(appinfo->num_child_cols * sizeof(AttrNumber));
+ appinfo->parent_colnos = pcolnos = palloc0_array(AttrNumber,
+ appinfo->num_child_cols);
foreach(l, query->targetList)
{
@@ -3063,8 +3063,7 @@ reduce_outer_joins_pass1(Node *jtnode)
{
reduce_outer_joins_pass1_state *result;
- result = (reduce_outer_joins_pass1_state *)
- palloc(sizeof(reduce_outer_joins_pass1_state));
+ result = palloc_object(reduce_outer_joins_pass1_state);
result->relids = NULL;
result->contains_outer = false;
result->sub_states = NIL;
@@ -3416,7 +3415,7 @@ report_reduced_full_join(reduce_outer_joins_pass2_state *state2,
{
reduce_outer_joins_partial_state *statep;
- statep = palloc(sizeof(reduce_outer_joins_partial_state));
+ statep = palloc_object(reduce_outer_joins_partial_state);
statep->full_join_rti = rtindex;
statep->unreduced_side = relids;
state2->partial_reduced = lappend(state2->partial_reduced, statep);
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 7c27dc24e21..a9f15883374 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -1500,7 +1500,7 @@ generate_append_tlist(List *colTypes, List *colCollations,
* If the inputs all agree on type and typmod of a particular column, use
* that typmod; else use -1.
*/
- colTypmods = (int32 *) palloc(list_length(colTypes) * sizeof(int32));
+ colTypmods = palloc_array(int32, list_length(colTypes));
foreach(tlistl, input_tlists)
{
diff --git a/src/backend/optimizer/util/appendinfo.c b/src/backend/optimizer/util/appendinfo.c
index 5b3dc0d8653..6bff78d393e 100644
--- a/src/backend/optimizer/util/appendinfo.c
+++ b/src/backend/optimizer/util/appendinfo.c
@@ -93,8 +93,7 @@ make_inh_translation_list(Relation oldrelation, Relation newrelation,
/* Initialize reverse-translation array with all entries zero */
appinfo->num_child_cols = newnatts;
- appinfo->parent_colnos = pcolnos =
- (AttrNumber *) palloc0(newnatts * sizeof(AttrNumber));
+ appinfo->parent_colnos = pcolnos = palloc0_array(AttrNumber, newnatts);
for (old_attno = 0; old_attno < oldnatts; old_attno++)
{
@@ -757,8 +756,7 @@ find_appinfos_by_relids(PlannerInfo *root, Relids relids, int *nappinfos)
int i;
/* Allocate an array that's certainly big enough */
- appinfos = (AppendRelInfo **)
- palloc(sizeof(AppendRelInfo *) * bms_num_members(relids));
+ appinfos = palloc_array(AppendRelInfo *, bms_num_members(relids));
i = -1;
while ((i = bms_next_member(relids, i)) >= 0)
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 43dfecfb47f..c6d7b28fe50 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -226,7 +226,7 @@ contain_window_function(Node *clause)
WindowFuncLists *
find_window_functions(Node *clause, Index maxWinRef)
{
- WindowFuncLists *lists = palloc(sizeof(WindowFuncLists));
+ WindowFuncLists *lists = palloc_object(WindowFuncLists);
lists->numWindowFuncs = 0;
lists->maxWinRef = maxWinRef;
@@ -4798,7 +4798,7 @@ inline_function(Oid funcid, Oid result_type, Oid result_collid,
* substitution of the inputs. So start building expression with inputs
* substituted.
*/
- usecounts = (int *) palloc0(funcform->pronargs * sizeof(int));
+ usecounts = palloc0_array(int, funcform->pronargs);
newexpr = substitute_actual_parameters(newexpr, funcform->pronargs,
args, usecounts);
diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index 71abb01f655..a470e4f5b51 100644
--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -2535,7 +2535,7 @@ set_baserel_partition_key_exprs(Relation relation,
Assert(partkey != NULL);
partnatts = partkey->partnatts;
- partexprs = (List **) palloc(sizeof(List *) * partnatts);
+ partexprs = palloc_array(List *, partnatts);
lc = list_head(partkey->partexprs);
for (cnt = 0; cnt < partnatts; cnt++)
diff --git a/src/backend/optimizer/util/predtest.c b/src/backend/optimizer/util/predtest.c
index b76fc81b08d..f2b2c309ef4 100644
--- a/src/backend/optimizer/util/predtest.c
+++ b/src/backend/optimizer/util/predtest.c
@@ -967,7 +967,7 @@ arrayconst_startup_fn(Node *clause, PredIterInfo info)
char elmalign;
/* Create working state struct */
- state = (ArrayConstIterState *) palloc(sizeof(ArrayConstIterState));
+ state = palloc_object(ArrayConstIterState);
info->state = state;
/* Deconstruct the array literal */
@@ -1046,7 +1046,7 @@ arrayexpr_startup_fn(Node *clause, PredIterInfo info)
ArrayExpr *arrayexpr;
/* Create working state struct */
- state = (ArrayExprIterState *) palloc(sizeof(ArrayExprIterState));
+ state = palloc_object(ArrayExprIterState);
info->state = state;
/* Set up a dummy OpExpr to return as the per-item node */
diff --git a/src/backend/optimizer/util/tlist.c b/src/backend/optimizer/util/tlist.c
index d2b4ecc5e51..af7b19be5c7 100644
--- a/src/backend/optimizer/util/tlist.c
+++ b/src/backend/optimizer/util/tlist.c
@@ -467,7 +467,7 @@ extract_grouping_ops(List *groupClause)
Oid *groupOperators;
ListCell *glitem;
- groupOperators = (Oid *) palloc(sizeof(Oid) * numCols);
+ groupOperators = palloc_array(Oid, numCols);
foreach(glitem, groupClause)
{
@@ -493,7 +493,7 @@ extract_grouping_collations(List *groupClause, List *tlist)
Oid *grpCollations;
ListCell *glitem;
- grpCollations = (Oid *) palloc(sizeof(Oid) * numCols);
+ grpCollations = palloc_array(Oid, numCols);
foreach(glitem, groupClause)
{
@@ -518,7 +518,7 @@ extract_grouping_cols(List *groupClause, List *tlist)
int colno = 0;
ListCell *glitem;
- grpColIdx = (AttrNumber *) palloc(sizeof(AttrNumber) * numCols);
+ grpColIdx = palloc_array(AttrNumber, numCols);
foreach(glitem, groupClause)
{
@@ -1089,7 +1089,7 @@ split_pathtarget_walker(Node *node, split_pathtarget_context *context)
*/
if (list_member(context->input_target_exprs, node))
{
- split_pathtarget_item *item = palloc(sizeof(split_pathtarget_item));
+ split_pathtarget_item *item = palloc_object(split_pathtarget_item);
item->expr = node;
item->sortgroupref = context->current_sgref;
@@ -1109,7 +1109,7 @@ split_pathtarget_walker(Node *node, split_pathtarget_context *context)
IsA(node, GroupingFunc) ||
IsA(node, WindowFunc))
{
- split_pathtarget_item *item = palloc(sizeof(split_pathtarget_item));
+ split_pathtarget_item *item = palloc_object(split_pathtarget_item);
item->expr = node;
item->sortgroupref = context->current_sgref;
@@ -1124,7 +1124,7 @@ split_pathtarget_walker(Node *node, split_pathtarget_context *context)
*/
if (IS_SRF_CALL(node))
{
- split_pathtarget_item *item = palloc(sizeof(split_pathtarget_item));
+ split_pathtarget_item *item = palloc_object(split_pathtarget_item);
List *save_input_vars = context->current_input_vars;
List *save_input_srfs = context->current_input_srfs;
int save_current_depth = context->current_depth;
diff --git a/src/backend/parser/analyze.c b/src/backend/parser/analyze.c
index 76f58b3aca3..4df043f8700 100644
--- a/src/backend/parser/analyze.c
+++ b/src/backend/parser/analyze.c
@@ -1631,7 +1631,7 @@ transformValuesClause(ParseState *pstate, SelectStmt *stmt)
/* Remember post-transformation length of first sublist */
sublist_length = list_length(sublist);
/* and allocate array for per-column lists */
- colexprs = (List **) palloc0(sublist_length * sizeof(List *));
+ colexprs = palloc0_array(List *, sublist_length);
}
else if (sublist_length != list_length(sublist))
{
@@ -1903,8 +1903,8 @@ transformSetOperationStmt(ParseState *pstate, SelectStmt *stmt)
qry->targetList = NIL;
targetvars = NIL;
targetnames = NIL;
- sortnscolumns = (ParseNamespaceColumn *)
- palloc0(list_length(sostmt->colTypes) * sizeof(ParseNamespaceColumn));
+ sortnscolumns = palloc0_array(ParseNamespaceColumn,
+ list_length(sostmt->colTypes));
sortcolindex = 0;
forfour(lct, sostmt->colTypes,
@@ -2660,8 +2660,7 @@ addNSItemForReturning(ParseState *pstate, const char *aliasname,
colnames = pstate->p_target_nsitem->p_rte->eref->colnames;
numattrs = list_length(colnames);
- nscolumns = (ParseNamespaceColumn *)
- palloc(numattrs * sizeof(ParseNamespaceColumn));
+ nscolumns = palloc_array(ParseNamespaceColumn, numattrs);
memcpy(nscolumns, pstate->p_target_nsitem->p_nscolumns,
numattrs * sizeof(ParseNamespaceColumn));
@@ -2671,7 +2670,7 @@ addNSItemForReturning(ParseState *pstate, const char *aliasname,
nscolumns[i].p_varreturningtype = returning_type;
/* build the nsitem, copying most fields from the target relation */
- nsitem = (ParseNamespaceItem *) palloc(sizeof(ParseNamespaceItem));
+ nsitem = palloc_object(ParseNamespaceItem);
nsitem->p_names = makeAlias(aliasname, colnames);
nsitem->p_rte = pstate->p_target_nsitem->p_rte;
nsitem->p_rtindex = pstate->p_target_nsitem->p_rtindex;
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index 2e64fcae7b2..4e79ce327ce 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -733,7 +733,7 @@ transformRangeTableFunc(ParseState *pstate, RangeTableFunc *rtf)
tf->ordinalitycol = -1;
/* Process column specs */
- names = palloc(sizeof(char *) * list_length(rtf->columns));
+ names = palloc_array(char *, list_length(rtf->columns));
colno = 0;
foreach(col, rtf->columns)
@@ -1288,9 +1288,8 @@ transformFromClauseItem(ParseState *pstate, Node *n,
res_colvars = NIL;
/* this may be larger than needed, but it's not worth being exact */
- res_nscolumns = (ParseNamespaceColumn *)
- palloc0((list_length(l_colnames) + list_length(r_colnames)) *
- sizeof(ParseNamespaceColumn));
+ res_nscolumns = palloc0_array(ParseNamespaceColumn,
+ (list_length(l_colnames) + list_length(r_colnames)));
res_colindex = 0;
if (j->usingClause)
@@ -1573,7 +1572,7 @@ transformFromClauseItem(ParseState *pstate, Node *n,
{
ParseNamespaceItem *jnsitem;
- jnsitem = (ParseNamespaceItem *) palloc(sizeof(ParseNamespaceItem));
+ jnsitem = palloc_object(ParseNamespaceItem);
jnsitem->p_names = j->join_using_alias;
jnsitem->p_rte = nsitem->p_rte;
jnsitem->p_rtindex = nsitem->p_rtindex;
diff --git a/src/backend/parser/parse_expr.c b/src/backend/parser/parse_expr.c
index bad1df732ea..392b2056443 100644
--- a/src/backend/parser/parse_expr.c
+++ b/src/backend/parser/parse_expr.c
@@ -2893,7 +2893,7 @@ make_row_comparison_op(ParseState *pstate, List *opname,
* containing the operators, and see which interpretations (strategy
* numbers) exist for each operator.
*/
- opinfo_lists = (List **) palloc(nopers * sizeof(List *));
+ opinfo_lists = palloc_array(List *, nopers);
strats = NULL;
i = 0;
foreach(l, opexprs)
diff --git a/src/backend/parser/parse_node.c b/src/backend/parser/parse_node.c
index d6feb16aef3..5f50fe750a6 100644
--- a/src/backend/parser/parse_node.c
+++ b/src/backend/parser/parse_node.c
@@ -40,7 +40,7 @@ make_parsestate(ParseState *parentParseState)
{
ParseState *pstate;
- pstate = palloc0(sizeof(ParseState));
+ pstate = palloc0_object(ParseState);
pstate->parentParseState = parentParseState;
diff --git a/src/backend/parser/parse_param.c b/src/backend/parser/parse_param.c
index 930921626b6..772f3e3c1d8 100644
--- a/src/backend/parser/parse_param.c
+++ b/src/backend/parser/parse_param.c
@@ -68,7 +68,7 @@ void
setup_parse_fixed_parameters(ParseState *pstate,
const Oid *paramTypes, int numParams)
{
- FixedParamState *parstate = palloc(sizeof(FixedParamState));
+ FixedParamState *parstate = palloc_object(FixedParamState);
parstate->paramTypes = paramTypes;
parstate->numParams = numParams;
@@ -84,7 +84,7 @@ void
setup_parse_variable_parameters(ParseState *pstate,
Oid **paramTypes, int *numParams)
{
- VarParamState *parstate = palloc(sizeof(VarParamState));
+ VarParamState *parstate = palloc_object(VarParamState);
parstate->paramTypes = paramTypes;
parstate->numParams = numParams;
diff --git a/src/backend/parser/parse_relation.c b/src/backend/parser/parse_relation.c
index 101fba34b18..d7143e7f7ba 100644
--- a/src/backend/parser/parse_relation.c
+++ b/src/backend/parser/parse_relation.c
@@ -964,7 +964,7 @@ searchRangeTableForCol(ParseState *pstate, const char *alias, const char *colnam
int location)
{
ParseState *orig_pstate = pstate;
- FuzzyAttrMatchState *fuzzystate = palloc(sizeof(FuzzyAttrMatchState));
+ FuzzyAttrMatchState *fuzzystate = palloc_object(FuzzyAttrMatchState);
fuzzystate->distance = MAX_FUZZY_DISTANCE + 1;
fuzzystate->rfirst = NULL;
@@ -1315,8 +1315,7 @@ buildNSItemFromTupleDesc(RangeTblEntry *rte, Index rtindex,
Assert(maxattrs == list_length(rte->eref->colnames));
/* extract per-column data from the tupdesc */
- nscolumns = (ParseNamespaceColumn *)
- palloc0(maxattrs * sizeof(ParseNamespaceColumn));
+ nscolumns = palloc0_array(ParseNamespaceColumn, maxattrs);
for (varattno = 0; varattno < maxattrs; varattno++)
{
@@ -1336,7 +1335,7 @@ buildNSItemFromTupleDesc(RangeTblEntry *rte, Index rtindex,
}
/* ... and build the nsitem */
- nsitem = (ParseNamespaceItem *) palloc(sizeof(ParseNamespaceItem));
+ nsitem = palloc_object(ParseNamespaceItem);
nsitem->p_names = rte->eref;
nsitem->p_rte = rte;
nsitem->p_rtindex = rtindex;
@@ -1381,8 +1380,7 @@ buildNSItemFromLists(RangeTblEntry *rte, Index rtindex,
Assert(maxattrs == list_length(colcollations));
/* extract per-column data from the lists */
- nscolumns = (ParseNamespaceColumn *)
- palloc0(maxattrs * sizeof(ParseNamespaceColumn));
+ nscolumns = palloc0_array(ParseNamespaceColumn, maxattrs);
varattno = 0;
forthree(lct, coltypes,
@@ -1400,7 +1398,7 @@ buildNSItemFromLists(RangeTblEntry *rte, Index rtindex,
}
/* ... and build the nsitem */
- nsitem = (ParseNamespaceItem *) palloc(sizeof(ParseNamespaceItem));
+ nsitem = palloc_object(ParseNamespaceItem);
nsitem->p_names = rte->eref;
nsitem->p_rte = rte;
nsitem->p_rtindex = rtindex;
@@ -1791,7 +1789,7 @@ addRangeTableEntryForFunction(ParseState *pstate,
rte->eref = eref;
/* Process each function ... */
- functupdescs = (TupleDesc *) palloc(nfuncs * sizeof(TupleDesc));
+ functupdescs = palloc_array(TupleDesc, nfuncs);
totalatts = 0;
funcno = 0;
@@ -2302,7 +2300,7 @@ addRangeTableEntryForJoin(ParseState *pstate,
* Build a ParseNamespaceItem, but don't add it to the pstate's namespace
* list --- caller must do that if appropriate.
*/
- nsitem = (ParseNamespaceItem *) palloc(sizeof(ParseNamespaceItem));
+ nsitem = palloc_object(ParseNamespaceItem);
nsitem->p_names = rte->eref;
nsitem->p_rte = rte;
nsitem->p_perminfo = NULL;
diff --git a/src/backend/parser/parse_type.c b/src/backend/parser/parse_type.c
index 7713bdc6af0..c7335ad9db1 100644
--- a/src/backend/parser/parse_type.c
+++ b/src/backend/parser/parse_type.c
@@ -369,7 +369,7 @@ typenameTypeMod(ParseState *pstate, const TypeName *typeName, Type typ)
* Currently, we allow simple numeric constants, string literals, and
* identifiers; possibly this list could be extended.
*/
- datums = (Datum *) palloc(list_length(typeName->typmods) * sizeof(Datum));
+ datums = palloc_array(Datum, list_length(typeName->typmods));
n = 0;
foreach(l, typeName->typmods)
{
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 4bdc2941efb..cf1f48491b9 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -360,8 +360,7 @@ create_hash_bounds(PartitionBoundSpec **boundspecs, int nparts,
boundinfo->null_index = -1;
boundinfo->default_index = -1;
- hbounds = (PartitionHashBound *)
- palloc(nparts * sizeof(PartitionHashBound));
+ hbounds = palloc_array(PartitionHashBound, nparts);
/* Convert from node to the internal representation */
for (i = 0; i < nparts; i++)
@@ -397,7 +396,7 @@ create_hash_bounds(PartitionBoundSpec **boundspecs, int nparts,
* arrays, here we just allocate a single array and below we'll just
* assign a portion of this array per partition.
*/
- boundDatums = (Datum *) palloc(nparts * 2 * sizeof(Datum));
+ boundDatums = palloc_array(Datum, nparts * 2);
/*
* For hash partitioning, there are as many datums (modulus and remainder
@@ -480,8 +479,7 @@ create_list_bounds(PartitionBoundSpec **boundspecs, int nparts,
boundinfo->default_index = -1;
ndatums = get_non_null_list_datum_count(boundspecs, nparts);
- all_values = (PartitionListValue *)
- palloc(ndatums * sizeof(PartitionListValue));
+ all_values = palloc_array(PartitionListValue, ndatums);
/* Create a unified list of non-null values across all partitions. */
for (j = 0, i = 0; i < nparts; i++)
@@ -544,7 +542,7 @@ create_list_bounds(PartitionBoundSpec **boundspecs, int nparts,
* arrays, here we just allocate a single array and below we'll just
* assign a portion of this array per datum.
*/
- boundDatums = (Datum *) palloc(ndatums * sizeof(Datum));
+ boundDatums = palloc_array(Datum, ndatums);
/*
* Copy values. Canonical indexes are values ranging from 0 to (nparts -
@@ -698,8 +696,7 @@ create_range_bounds(PartitionBoundSpec **boundspecs, int nparts,
/* Will be set correctly below. */
boundinfo->default_index = -1;
- all_bounds = (PartitionRangeBound **)
- palloc0(2 * nparts * sizeof(PartitionRangeBound *));
+ all_bounds = palloc0_array(PartitionRangeBound *, 2 * nparts);
/* Create a unified list of range bounds across all the partitions. */
ndatums = 0;
@@ -739,8 +736,7 @@ create_range_bounds(PartitionBoundSpec **boundspecs, int nparts,
key);
/* Save distinct bounds from all_bounds into rbounds. */
- rbounds = (PartitionRangeBound **)
- palloc(ndatums * sizeof(PartitionRangeBound *));
+ rbounds = palloc_array(PartitionRangeBound *, ndatums);
k = 0;
prev = NULL;
for (i = 0; i < ndatums; i++)
@@ -823,9 +819,9 @@ create_range_bounds(PartitionBoundSpec **boundspecs, int nparts,
* arrays in each loop.
*/
partnatts = key->partnatts;
- boundDatums = (Datum *) palloc(ndatums * partnatts * sizeof(Datum));
- boundKinds = (PartitionRangeDatumKind *) palloc(ndatums * partnatts *
- sizeof(PartitionRangeDatumKind));
+ boundDatums = palloc_array(Datum, ndatums * partnatts);
+ boundKinds = palloc_array(PartitionRangeDatumKind,
+ ndatums * partnatts);
for (i = 0; i < ndatums; i++)
{
@@ -1038,8 +1034,8 @@ partition_bounds_copy(PartitionBoundInfo src,
* for storing the PartitionRangeDatumKind, we allocate a single chunk
* here and use a smaller portion of it for each datum.
*/
- boundKinds = (PartitionRangeDatumKind *) palloc(ndatums * partnatts *
- sizeof(PartitionRangeDatumKind));
+ boundKinds = palloc_array(PartitionRangeDatumKind,
+ ndatums * partnatts);
for (i = 0; i < ndatums; i++)
{
@@ -1060,7 +1056,7 @@ partition_bounds_copy(PartitionBoundInfo src,
*/
hash_part = (key->strategy == PARTITION_STRATEGY_HASH);
natts = hash_part ? 2 : partnatts;
- boundDatums = palloc(ndatums * natts * sizeof(Datum));
+ boundDatums = palloc_array(Datum, ndatums * natts);
for (i = 0; i < ndatums; i++)
{
@@ -2392,7 +2388,7 @@ fix_merged_indexes(PartitionMap *outer_map, PartitionMap *inner_map,
Assert(nmerged > 0);
- new_indexes = (int *) palloc(sizeof(int) * nmerged);
+ new_indexes = palloc_array(int, nmerged);
for (i = 0; i < nmerged; i++)
new_indexes[i] = -1;
@@ -2452,8 +2448,8 @@ generate_matching_part_pairs(RelOptInfo *outer_rel, RelOptInfo *inner_rel,
Assert(*outer_parts == NIL);
Assert(*inner_parts == NIL);
- outer_indexes = (int *) palloc(sizeof(int) * nmerged);
- inner_indexes = (int *) palloc(sizeof(int) * nmerged);
+ outer_indexes = palloc_array(int, nmerged);
+ inner_indexes = palloc_array(int, nmerged);
for (i = 0; i < nmerged; i++)
outer_indexes[i] = inner_indexes[i] = -1;
@@ -3433,7 +3429,7 @@ make_one_partition_rbound(PartitionKey key, int index, List *datums, bool lower)
Assert(datums != NIL);
- bound = (PartitionRangeBound *) palloc0(sizeof(PartitionRangeBound));
+ bound = palloc0_object(PartitionRangeBound);
bound->index = index;
bound->datums = (Datum *) palloc0(key->partnatts * sizeof(Datum));
bound->kind = (PartitionRangeDatumKind *) palloc0(key->partnatts *
diff --git a/src/backend/partitioning/partdesc.c b/src/backend/partitioning/partdesc.c
index 328b4d450e4..fa651df106b 100644
--- a/src/backend/partitioning/partdesc.c
+++ b/src/backend/partitioning/partdesc.c
@@ -171,9 +171,9 @@ retry:
/* Allocate working arrays for OIDs, leaf flags, and boundspecs. */
if (nparts > 0)
{
- oids = (Oid *) palloc(nparts * sizeof(Oid));
- is_leaf = (bool *) palloc(nparts * sizeof(bool));
- boundspecs = palloc(nparts * sizeof(PartitionBoundSpec *));
+ oids = palloc_array(Oid, nparts);
+ is_leaf = palloc_array(bool, nparts);
+ boundspecs = palloc_array(PartitionBoundSpec *, nparts);
}
/* Collect bound spec nodes for each partition. */
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index fa3c5b3c3bb..d57200946a5 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -242,7 +242,7 @@ make_partition_pruneinfo(PlannerInfo *root, RelOptInfo *parentrel,
* that zero can represent an un-filled array entry.
*/
allpartrelids = NIL;
- relid_subplan_map = palloc0(sizeof(int) * root->simple_rel_array_size);
+ relid_subplan_map = palloc0_array(int, root->simple_rel_array_size);
i = 1;
foreach(lc, subpaths)
@@ -458,7 +458,7 @@ make_partitionedrel_pruneinfo(PlannerInfo *root, RelOptInfo *parentrel,
* In this phase we discover whether runtime pruning is needed at all; if
* not, we can avoid doing further work.
*/
- relid_subpart_map = palloc0(sizeof(int) * root->simple_rel_array_size);
+ relid_subpart_map = palloc0_array(int, root->simple_rel_array_size);
i = 1;
rti = -1;
@@ -645,11 +645,11 @@ make_partitionedrel_pruneinfo(PlannerInfo *root, RelOptInfo *parentrel,
* Also construct a Bitmapset of all partitions that are present (that
* is, not pruned already).
*/
- subplan_map = (int *) palloc(nparts * sizeof(int));
+ subplan_map = palloc_array(int, nparts);
memset(subplan_map, -1, nparts * sizeof(int));
- subpart_map = (int *) palloc(nparts * sizeof(int));
+ subpart_map = palloc_array(int, nparts);
memset(subpart_map, -1, nparts * sizeof(int));
- relid_map = (Oid *) palloc0(nparts * sizeof(Oid));
+ relid_map = palloc0_array(Oid, nparts);
present_parts = NULL;
i = -1;
@@ -838,8 +838,7 @@ get_matching_partitions(PartitionPruneContext *context, List *pruning_steps)
* result of applying all pruning steps is the value contained in the slot
* of the last pruning step.
*/
- results = (PruneStepResult **)
- palloc0(num_steps * sizeof(PruneStepResult *));
+ results = palloc0_array(PruneStepResult *, num_steps);
foreach(lc, pruning_steps)
{
PartitionPruneStep *step = lfirst(lc);
@@ -1861,7 +1860,7 @@ match_clause_to_partition_key(GeneratePruningStepsContext *context,
return PARTCLAUSE_MATCH_STEPS;
}
- partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));
+ partclause = palloc_object(PartClauseInfo);
partclause->keyno = partkeyidx;
/* Do pruning with the Boolean equality operator. */
partclause->opno = BooleanEqualOperator;
@@ -2118,7 +2117,7 @@ match_clause_to_partition_key(GeneratePruningStepsContext *context,
/*
* Build the clause, passing the negator if applicable.
*/
- partclause = (PartClauseInfo *) palloc(sizeof(PartClauseInfo));
+ partclause = palloc_object(PartClauseInfo);
partclause->keyno = partkeyidx;
if (is_opne_listp)
{
@@ -2664,7 +2663,7 @@ get_matching_hash_bounds(PartitionPruneContext *context,
StrategyNumber opstrategy, Datum *values, int nvalues,
FmgrInfo *partsupfunc, Bitmapset *nullkeys)
{
- PruneStepResult *result = (PruneStepResult *) palloc0(sizeof(PruneStepResult));
+ PruneStepResult *result = palloc0_object(PruneStepResult);
PartitionBoundInfo boundinfo = context->boundinfo;
int *partindices = boundinfo->indexes;
int partnatts = context->partnatts;
@@ -2741,7 +2740,7 @@ get_matching_list_bounds(PartitionPruneContext *context,
StrategyNumber opstrategy, Datum value, int nvalues,
FmgrInfo *partsupfunc, Bitmapset *nullkeys)
{
- PruneStepResult *result = (PruneStepResult *) palloc0(sizeof(PruneStepResult));
+ PruneStepResult *result = palloc0_object(PruneStepResult);
PartitionBoundInfo boundinfo = context->boundinfo;
int off,
minoff,
@@ -2952,7 +2951,7 @@ get_matching_range_bounds(PartitionPruneContext *context,
StrategyNumber opstrategy, Datum *values, int nvalues,
FmgrInfo *partsupfunc, Bitmapset *nullkeys)
{
- PruneStepResult *result = (PruneStepResult *) palloc0(sizeof(PruneStepResult));
+ PruneStepResult *result = palloc0_object(PruneStepResult);
PartitionBoundInfo boundinfo = context->boundinfo;
Oid *partcollation = context->partcollation;
int partnatts = context->partnatts;
@@ -3475,7 +3474,7 @@ perform_pruning_base_step(PartitionPruneContext *context,
{
PruneStepResult *result;
- result = (PruneStepResult *) palloc(sizeof(PruneStepResult));
+ result = palloc_object(PruneStepResult);
result->bound_offsets = NULL;
result->scan_default = false;
result->scan_null = false;
@@ -3564,7 +3563,7 @@ perform_pruning_combine_step(PartitionPruneContext *context,
PartitionPruneStepCombine *cstep,
PruneStepResult **step_results)
{
- PruneStepResult *result = (PruneStepResult *) palloc0(sizeof(PruneStepResult));
+ PruneStepResult *result = palloc0_object(PruneStepResult);
bool firststep;
ListCell *lc1;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 0ab921a169b..6b8fc7d5da9 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1016,7 +1016,7 @@ rebuild_database_list(Oid newdb)
int i;
/* put all the hash elements into an array */
- dbary = palloc(nelems * sizeof(avl_dbase));
+ dbary = palloc_array(avl_dbase, nelems);
i = 0;
hash_seq_init(&seq, dbhash);
@@ -1848,7 +1848,7 @@ get_database_list(void)
*/
oldcxt = MemoryContextSwitchTo(resultcxt);
- avdb = (avw_dbase *) palloc(sizeof(avw_dbase));
+ avdb = palloc_object(avw_dbase);
avdb->adw_datid = pgdatabase->oid;
avdb->adw_name = pstrdup(NameStr(pgdatabase->datname));
@@ -2699,7 +2699,7 @@ extract_autovac_opts(HeapTuple tup, TupleDesc pg_class_desc)
if (relopts == NULL)
return NULL;
- av = palloc(sizeof(AutoVacOpts));
+ av = palloc_object(AutoVacOpts);
memcpy(av, &(((StdRdOptions *) relopts)->autovacuum), sizeof(AutoVacOpts));
pfree(relopts);
@@ -2794,7 +2794,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
? avopts->multixact_freeze_table_age
: default_multixact_freeze_table_age;
- tab = palloc(sizeof(autovac_table));
+ tab = palloc_object(autovac_table);
tab->at_relid = relid;
tab->at_sharedrel = classForm->relisshared;
diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 9bfd0fd665c..fc1cb72ce73 100644
--- a/src/backend/postmaster/checkpointer.c
+++ b/src/backend/postmaster/checkpointer.c
@@ -1191,7 +1191,7 @@ CompactCheckpointerRequestQueue(void)
return false;
/* Initialize skip_slot array */
- skip_slot = palloc0(sizeof(bool) * CheckpointerShmem->num_requests);
+ skip_slot = palloc0_array(bool, CheckpointerShmem->num_requests);
/* Initialize temporary hash table */
ctl.keysize = sizeof(CheckpointerRequest);
@@ -1302,7 +1302,7 @@ AbsorbSyncRequests(void)
n = CheckpointerShmem->num_requests;
if (n > 0)
{
- requests = (CheckpointerRequest *) palloc(n * sizeof(CheckpointerRequest));
+ requests = palloc_array(CheckpointerRequest, n);
memcpy(requests, CheckpointerShmem->requests, n * sizeof(CheckpointerRequest));
}
diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c
index 12ee815a626..4d10c5aa205 100644
--- a/src/backend/postmaster/pgarch.c
+++ b/src/backend/postmaster/pgarch.c
@@ -253,7 +253,7 @@ PgArchiverMain(char *startup_data, size_t startup_data_len)
PgArch->pgprocno = MyProcNumber;
/* Create workspace for pgarch_readyXlog() */
- arch_files = palloc(sizeof(struct arch_files_state));
+ arch_files = palloc_object(struct arch_files_state);
arch_files->arch_files_size = 0;
/* Initialize our max-heap for prioritizing files to archive. */
@@ -939,7 +939,7 @@ LoadArchiveLibrary(void)
ereport(ERROR,
(errmsg("archive modules must register an archive callback")));
- archive_module_state = (ArchiveModuleState *) palloc0(sizeof(ArchiveModuleState));
+ archive_module_state = palloc0_object(ArchiveModuleState);
if (ArchiveCallbacks->startup_cb != NULL)
ArchiveCallbacks->startup_cb(archive_module_state);
diff --git a/src/backend/postmaster/pmchild.c b/src/backend/postmaster/pmchild.c
index 0d473226c3a..1f6998e393a 100644
--- a/src/backend/postmaster/pmchild.c
+++ b/src/backend/postmaster/pmchild.c
@@ -125,7 +125,7 @@ InitPostmasterChildSlots(void)
num_pmchild_slots += pmchild_pools[i].size;
/* Initialize them */
- slots = palloc(num_pmchild_slots * sizeof(PMChild));
+ slots = palloc_array(PMChild, num_pmchild_slots);
slotno = 0;
for (int btype = 0; btype < BACKEND_NUM_TYPES; btype++)
{
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 5f615d0f605..35f51d93a9d 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -1091,7 +1091,7 @@ PostmasterMain(int argc, char *argv[])
* First set up an on_proc_exit function that's charged with closing the
* sockets again at postmaster shutdown.
*/
- ListenSockets = palloc(MAXLISTEN * sizeof(pgsocket));
+ ListenSockets = palloc_array(pgsocket, MAXLISTEN);
on_proc_exit(CloseServerPorts, 0);
if (ListenAddresses)
@@ -4267,7 +4267,7 @@ pgwin32_register_deadchild_callback(HANDLE procHandle, DWORD procId)
{
win32_deadchild_waitinfo *childinfo;
- childinfo = palloc(sizeof(win32_deadchild_waitinfo));
+ childinfo = palloc_object(win32_deadchild_waitinfo);
childinfo->procHandle = procHandle;
childinfo->procId = procId;
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index a71810d55e5..41e3a0cfbd6 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -960,7 +960,7 @@ process_pipe_input(char *logbuffer, int *bytes_in_logbuffer)
* Need a free slot, but there isn't one in the list,
* so create a new one and extend the list with it.
*/
- free_slot = palloc(sizeof(save_buffer));
+ free_slot = palloc_object(save_buffer);
buffer_list = lappend(buffer_list, free_slot);
buffer_lists[p.pid % NBUFFER_LISTS] = buffer_list;
}
diff --git a/src/backend/postmaster/walsummarizer.c b/src/backend/postmaster/walsummarizer.c
index ffbf0439358..8b9762bd828 100644
--- a/src/backend/postmaster/walsummarizer.c
+++ b/src/backend/postmaster/walsummarizer.c
@@ -917,8 +917,7 @@ SummarizeWAL(TimeLineID tli, XLogRecPtr start_lsn, bool exact,
bool fast_forward = true;
/* Initialize private data for xlogreader. */
- private_data = (SummarizerReadLocalXLogPrivate *)
- palloc0(sizeof(SummarizerReadLocalXLogPrivate));
+ private_data = palloc0_object(SummarizerReadLocalXLogPrivate);
private_data->tli = tli;
private_data->historic = !XLogRecPtrIsInvalid(switch_lsn);
private_data->read_upto = maximum_lsn;
diff --git a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
index 1b158c9d288..112593de839 100644
--- a/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
+++ b/src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
@@ -210,7 +210,7 @@ libpqrcv_connect(const char *conninfo, bool replication, bool logical,
Assert(i < sizeof(keys));
- conn = palloc0(sizeof(WalReceiverConn));
+ conn = palloc0_object(WalReceiverConn);
conn->streamConn = PQconnectStartParams(keys, vals,
/* expand_dbname = */ true);
if (PQstatus(conn->streamConn) == CONNECTION_BAD)
@@ -1249,7 +1249,7 @@ libpqrcv_exec(WalReceiverConn *conn, const char *query,
const int nRetTypes, const Oid *retTypes)
{
PGresult *pgres = NULL;
- WalRcvExecResult *walres = palloc0(sizeof(WalRcvExecResult));
+ WalRcvExecResult *walres = palloc0_object(WalRcvExecResult);
char *diag_sqlstate;
if (MyDatabaseId == InvalidOid)
diff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.c
index 268b2675caa..b4949eb12f5 100644
--- a/src/backend/replication/logical/applyparallelworker.c
+++ b/src/backend/replication/logical/applyparallelworker.c
@@ -425,7 +425,7 @@ pa_launch_parallel_worker(void)
*/
oldcontext = MemoryContextSwitchTo(ApplyContext);
- winfo = (ParallelApplyWorkerInfo *) palloc0(sizeof(ParallelApplyWorkerInfo));
+ winfo = palloc0_object(ParallelApplyWorkerInfo);
/* Setup shared memory. */
if (!pa_setup_dsm(winfo))
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a8..19cdb9f625f 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -142,7 +142,7 @@ get_subscription_list(void)
*/
oldcxt = MemoryContextSwitchTo(resultcxt);
- sub = (Subscription *) palloc0(sizeof(Subscription));
+ sub = palloc0_object(Subscription);
sub->oid = subform->oid;
sub->dbid = subform->subdbid;
sub->owner = subform->subowner;
diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 0b25efafe2b..6c0503ad2e8 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -170,7 +170,7 @@ StartupDecodingContext(List *output_plugin_options,
"Logical decoding context",
ALLOCSET_DEFAULT_SIZES);
old_context = MemoryContextSwitchTo(context);
- ctx = palloc0(sizeof(LogicalDecodingContext));
+ ctx = palloc0_object(LogicalDecodingContext);
ctx->context = context;
diff --git a/src/backend/replication/logical/logicalfuncs.c b/src/backend/replication/logical/logicalfuncs.c
index 0148ec36788..4c5747ba956 100644
--- a/src/backend/replication/logical/logicalfuncs.c
+++ b/src/backend/replication/logical/logicalfuncs.c
@@ -140,7 +140,7 @@ pg_logical_slot_get_changes_guts(FunctionCallInfo fcinfo, bool confirm, bool bin
arr = PG_GETARG_ARRAYTYPE_P(3);
/* state to write output to */
- p = palloc0(sizeof(DecodingOutputState));
+ p = palloc0_object(DecodingOutputState);
p->binary_output = binary;
diff --git a/src/backend/replication/logical/proto.c b/src/backend/replication/logical/proto.c
index bef350714db..8713f0b0725 100644
--- a/src/backend/replication/logical/proto.c
+++ b/src/backend/replication/logical/proto.c
@@ -689,7 +689,7 @@ logicalrep_write_rel(StringInfo out, TransactionId xid, Relation rel,
LogicalRepRelation *
logicalrep_read_rel(StringInfo in)
{
- LogicalRepRelation *rel = palloc(sizeof(LogicalRepRelation));
+ LogicalRepRelation *rel = palloc_object(LogicalRepRelation);
rel->remoteid = pq_getmsgint(in, 4);
@@ -978,8 +978,8 @@ logicalrep_read_attrs(StringInfo in, LogicalRepRelation *rel)
Bitmapset *attkeys = NULL;
natts = pq_getmsgint(in, 2);
- attnames = palloc(natts * sizeof(char *));
- atttyps = palloc(natts * sizeof(Oid));
+ attnames = palloc_array(char *, natts);
+ atttyps = palloc_array(Oid, natts);
/* read the attributes */
for (i = 0; i < natts; i++)
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 79b60df7cf0..082f1152678 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2387,7 +2387,8 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
int nrelations = 0;
Relation *relations;
- relations = palloc0(nrelids * sizeof(Relation));
+ relations = palloc0_array(Relation, nrelids);
for (i = 0; i < nrelids; i++)
{
Oid relid = change->data.truncate.relids[i];
@@ -3446,8 +3447,7 @@ ReorderBufferGetCatalogChangesXacts(ReorderBuffer *rb)
return NULL;
/* Initialize XID array */
- xids = (TransactionId *) palloc(sizeof(TransactionId) *
- dclist_count(&rb->catchange_txns));
+ xids = palloc_array(TransactionId, dclist_count(&rb->catchange_txns));
dclist_foreach(iter, &rb->catchange_txns)
{
ReorderBufferTXN *txn = dclist_container(ReorderBufferTXN,
@@ -4835,9 +4835,9 @@ ReorderBufferToastReplace(ReorderBuffer *rb, ReorderBufferTXN *txn,
toast_desc = RelationGetDescr(toast_rel);
/* should we allocate from stack instead? */
- attrs = palloc0(sizeof(Datum) * desc->natts);
- isnull = palloc0(sizeof(bool) * desc->natts);
- free = palloc0(sizeof(bool) * desc->natts);
+ attrs = palloc0_array(Datum, desc->natts);
+ isnull = palloc0_array(bool, desc->natts);
+ free = palloc0_array(bool, desc->natts);
newtup = change->data.tp.newtuple;
@@ -5246,7 +5246,7 @@ UpdateLogicalMappings(HTAB *tuplecid_data, Oid relid, Snapshot snapshot)
continue;
/* ok, relevant, queue for apply */
- f = palloc(sizeof(RewriteMappingFile));
+ f = palloc_object(RewriteMappingFile);
f->lsn = f_lsn;
strcpy(f->fname, mapping_de->d_name);
files = lappend(files, f);
diff --git a/src/backend/replication/logical/slotsync.c b/src/backend/replication/logical/slotsync.c
index f6945af1d43..4768035aeb3 100644
--- a/src/backend/replication/logical/slotsync.c
+++ b/src/backend/replication/logical/slotsync.c
@@ -822,7 +822,7 @@ synchronize_slots(WalReceiverConn *wrconn)
while (tuplestore_gettupleslot(res->tuplestore, true, false, tupslot))
{
bool isnull;
- RemoteSlot *remote_slot = palloc0(sizeof(RemoteSlot));
+ RemoteSlot *remote_slot = palloc0_object(RemoteSlot);
Datum d;
int col = 0;
diff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c
index bbedd3de318..347571ef5de 100644
--- a/src/backend/replication/logical/snapbuild.c
+++ b/src/backend/replication/logical/snapbuild.c
@@ -199,7 +199,7 @@ AllocateSnapshotBuilder(ReorderBuffer *reorder,
ALLOCSET_DEFAULT_SIZES);
oldcontext = MemoryContextSwitchTo(context);
- builder = palloc0(sizeof(SnapBuild));
+ builder = palloc0_object(SnapBuild);
builder->state = SNAPBUILD_START;
builder->context = context;
@@ -486,8 +486,7 @@ SnapBuildInitialSnapshot(SnapBuild *builder)
MyProc->xmin = snap->xmin;
/* allocate in transaction context */
- newxip = (TransactionId *)
- palloc(sizeof(TransactionId) * GetMaxSnapshotXidCount());
+ newxip = palloc_array(TransactionId, GetMaxSnapshotXidCount());
/*
* snapbuild.c builds transactions in an "inverted" manner, which means it
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index 6af5c9fe16c..19a21c8a9d0 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -1609,7 +1609,7 @@ FetchTableStates(bool *started_tx)
oldctx = MemoryContextSwitchTo(CacheMemoryContext);
foreach(lc, rstates)
{
- rstate = palloc(sizeof(SubscriptionRelState));
+ rstate = palloc_object(SubscriptionRelState);
memcpy(rstate, lfirst(lc), sizeof(SubscriptionRelState));
table_states_not_ready = lappend(table_states_not_ready, rstate);
}
diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 334bf3e7aff..0d2b1bf7b52 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -655,7 +655,7 @@ create_edata_for_relation(LogicalRepRelMapEntry *rel)
List *perminfos = NIL;
ResultRelInfo *resultRelInfo;
- edata = (ApplyExecutionData *) palloc0(sizeof(ApplyExecutionData));
+ edata = palloc0_object(ApplyExecutionData);
edata->targetRel = rel;
edata->estate = estate = CreateExecutorState();
@@ -753,8 +753,8 @@ slot_fill_defaults(LogicalRepRelMapEntry *rel, EState *estate,
if (num_phys_attrs == rel->remoterel.natts)
return;
- defmap = (int *) palloc(num_phys_attrs * sizeof(int));
- defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
+ defmap = palloc_array(int, num_phys_attrs);
+ defexprs = palloc_array(ExprState *, num_phys_attrs);
Assert(rel->attrmap->maplen == num_phys_attrs);
for (attnum = 0; attnum < num_phys_attrs; attnum++)
@@ -3540,7 +3540,7 @@ store_flush_position(XLogRecPtr remote_lsn, XLogRecPtr local_lsn)
MemoryContextSwitchTo(ApplyContext);
/* Track commit lsn */
- flushpos = (FlushPosition *) palloc(sizeof(FlushPosition));
+ flushpos = palloc_object(FlushPosition);
flushpos->local_end = local_lsn;
flushpos->remote_end = remote_lsn;
@@ -4250,14 +4250,14 @@ subxact_info_add(TransactionId xid)
* subxact_info_read.
*/
oldctx = MemoryContextSwitchTo(LogicalStreamingContext);
- subxacts = palloc(subxact_data.nsubxacts_max * sizeof(SubXactInfo));
+ subxacts = palloc_array(SubXactInfo, subxact_data.nsubxacts_max);
MemoryContextSwitchTo(oldctx);
}
else if (subxact_data.nsubxacts == subxact_data.nsubxacts_max)
{
subxact_data.nsubxacts_max *= 2;
- subxacts = repalloc(subxacts,
- subxact_data.nsubxacts_max * sizeof(SubXactInfo));
+ subxacts = repalloc_array(subxacts, SubXactInfo,
+ subxact_data.nsubxacts_max);
}
subxacts[subxact_data.nsubxacts].xid = xid;
diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index 2b7499b34b9..4d35957c1b0 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -424,7 +424,7 @@ static void
pgoutput_startup(LogicalDecodingContext *ctx, OutputPluginOptions *opt,
bool is_init)
{
- PGOutputData *data = palloc0(sizeof(PGOutputData));
+ PGOutputData *data = palloc0_object(PGOutputData);
static bool publication_callback_registered = false;
/* Create our memory context for private allocations. */
@@ -1629,7 +1629,7 @@ pgoutput_truncate(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
old = MemoryContextSwitchTo(data->context);
- relids = palloc0(nrelations * sizeof(Oid));
+ relids = palloc0_array(Oid, nrelations);
nrelids = 0;
for (i = 0; i < nrelations; i++)
diff --git a/src/backend/replication/syncrep.c b/src/backend/replication/syncrep.c
index 1ce8bc7533f..96c2a80b144 100644
--- a/src/backend/replication/syncrep.c
+++ b/src/backend/replication/syncrep.c
@@ -662,9 +662,9 @@ SyncRepGetNthLatestSyncRecPtr(XLogRecPtr *writePtr,
/* Should have enough candidates, or somebody messed up */
Assert(nth > 0 && nth <= num_standbys);
- write_array = (XLogRecPtr *) palloc(sizeof(XLogRecPtr) * num_standbys);
- flush_array = (XLogRecPtr *) palloc(sizeof(XLogRecPtr) * num_standbys);
- apply_array = (XLogRecPtr *) palloc(sizeof(XLogRecPtr) * num_standbys);
+ write_array = palloc_array(XLogRecPtr, num_standbys);
+ flush_array = palloc_array(XLogRecPtr, num_standbys);
+ apply_array = palloc_array(XLogRecPtr, num_standbys);
for (i = 0; i < num_standbys; i++)
{
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 716092f717c..b431055c76c 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -1459,8 +1459,8 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
elog(ERROR, "return type must be a row type");
- values = palloc0(sizeof(Datum) * tupdesc->natts);
- nulls = palloc0(sizeof(bool) * tupdesc->natts);
+ values = palloc0_array(Datum, tupdesc->natts);
+ nulls = palloc0_array(bool, tupdesc->natts);
/* Fetch values */
values[0] = Int32GetDatum(pid);
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index a0782b1bbf6..52f08e807e0 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -3819,7 +3819,7 @@ WalSndGetStateString(WalSndState state)
static Interval *
offset_to_interval(TimeOffset offset)
{
- Interval *result = palloc(sizeof(Interval));
+ Interval *result = palloc_object(Interval);
result->month = 0;
result->day = 0;
diff --git a/src/backend/rewrite/rewriteHandler.c b/src/backend/rewrite/rewriteHandler.c
index b74f2acc327..a976f0dced0 100644
--- a/src/backend/rewrite/rewriteHandler.c
+++ b/src/backend/rewrite/rewriteHandler.c
@@ -795,7 +795,7 @@ rewriteTargetListIU(List *targetList,
* scan, then appended to the reconstructed tlist.
*/
numattrs = RelationGetNumberOfAttributes(target_relation);
- new_tles = (TargetEntry **) palloc0(numattrs * sizeof(TargetEntry *));
+ new_tles = palloc0_array(TargetEntry *, numattrs);
next_junk_attrno = numattrs + 1;
foreach(temp, targetList)
@@ -1449,7 +1449,7 @@ rewriteValuesRTE(Query *parsetree, RangeTblEntry *rte, int rti,
* columns), and we complain if such a thing does occur.
*/
numattrs = list_length(linitial(rte->values_lists));
- attrnos = (int *) palloc0(numattrs * sizeof(int));
+ attrnos = palloc0_array(int, numattrs);
foreach(lc, parsetree->targetList)
{
@@ -4285,7 +4285,7 @@ RewriteQuery(Query *parsetree, List *rewrite_events, int orig_rt_length)
RelationGetRelationName(rt_entry_relation))));
}
- rev = (rewrite_event *) palloc(sizeof(rewrite_event));
+ rev = palloc_object(rewrite_event);
rev->relation = RelationGetRelid(rt_entry_relation);
rev->event = event;
rewrite_events = lappend(rewrite_events, rev);
diff --git a/src/backend/rewrite/rewriteManip.c b/src/backend/rewrite/rewriteManip.c
index bca11500e9e..0205377cef9 100644
--- a/src/backend/rewrite/rewriteManip.c
+++ b/src/backend/rewrite/rewriteManip.c
@@ -1558,7 +1558,7 @@ map_variable_attnos_mutator(Node *node,
var->varlevelsup == context->sublevels_up)
{
/* Found a matching variable, make the substitution */
- Var *newvar = (Var *) palloc(sizeof(Var));
+ Var *newvar = palloc_object(Var);
int attno = var->varattno;
*newvar = *var; /* initially copy all fields of the Var */
@@ -1629,7 +1629,7 @@ map_variable_attnos_mutator(Node *node,
context->to_rowtype != var->vartype)
{
ConvertRowtypeExpr *newnode;
- Var *newvar = (Var *) palloc(sizeof(Var));
+ Var *newvar = palloc_object(Var);
/* whole-row variable, warn caller */
*(context->found_whole_row) = true;
@@ -1642,7 +1642,7 @@ map_variable_attnos_mutator(Node *node,
/* Var itself is changed to the requested type. */
newvar->vartype = context->to_rowtype;
- newnode = (ConvertRowtypeExpr *) palloc(sizeof(ConvertRowtypeExpr));
+ newnode = palloc_object(ConvertRowtypeExpr);
*newnode = *r; /* initially copy all fields of the CRE */
newnode->arg = (Expr *) newvar;
diff --git a/src/backend/snowball/dict_snowball.c b/src/backend/snowball/dict_snowball.c
index 5197968e860..e1524aa7ca0 100644
--- a/src/backend/snowball/dict_snowball.c
+++ b/src/backend/snowball/dict_snowball.c
@@ -226,7 +226,7 @@ dsnowball_init(PG_FUNCTION_ARGS)
bool stoploaded = false;
ListCell *l;
- d = (DictSnowball *) palloc0(sizeof(DictSnowball));
+ d = palloc0_object(DictSnowball);
foreach(l, dictoptions)
{
@@ -275,7 +275,7 @@ dsnowball_lexize(PG_FUNCTION_ARGS)
char *in = (char *) PG_GETARG_POINTER(1);
int32 len = PG_GETARG_INT32(2);
char *txt = str_tolower(in, len, DEFAULT_COLLATION_OID);
- TSLexeme *res = palloc0(sizeof(TSLexeme) * 2);
+ TSLexeme *res = palloc0_array(TSLexeme, 2);
/*
* Do not pass strings exceeding 1000 bytes to the stemmer, as they're
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index eb2fc4366b4..8b7fa0e3591 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -156,7 +156,7 @@ generate_dependencies_recurse(DependencyGenerator state, int index,
static void
generate_dependencies(DependencyGenerator state)
{
- AttrNumber *current = (AttrNumber *) palloc0(sizeof(AttrNumber) * state->k);
+ AttrNumber *current = palloc0_array(AttrNumber, state->k);
generate_dependencies_recurse(state, 0, 0, current);
@@ -243,7 +243,7 @@ dependency_degree(StatsBuildData *data, int k, AttrNumber *dependency)
* Translate the array of indexes to regular attnums for the dependency
* (we will need this to identify the columns in StatsBuildData).
*/
- attnums_dep = (AttrNumber *) palloc(k * sizeof(AttrNumber));
+ attnums_dep = palloc_array(AttrNumber, k);
for (i = 0; i < k; i++)
attnums_dep[i] = data->attnums[dependency[i]];
@@ -408,8 +408,7 @@ statext_dependencies_build(StatsBuildData *data)
/* initialize the list of dependencies */
if (dependencies == NULL)
{
- dependencies
- = (MVDependencies *) palloc0(sizeof(MVDependencies));
+ dependencies = palloc0_object(MVDependencies);
dependencies->magic = STATS_DEPS_MAGIC;
dependencies->type = STATS_DEPS_TYPE_BASIC;
@@ -511,7 +510,7 @@ statext_dependencies_deserialize(bytea *data)
VARSIZE_ANY_EXHDR(data), SizeOfHeader);
/* read the MVDependencies header */
- dependencies = (MVDependencies *) palloc0(sizeof(MVDependencies));
+ dependencies = palloc0_object(MVDependencies);
/* initialize pointer to the data part (skip the varlena header) */
tmp = VARDATA_ANY(data);
@@ -1050,7 +1049,7 @@ clauselist_apply_dependencies(PlannerInfo *root, List *clauses,
* and mark all the corresponding clauses as estimated.
*/
nattrs = bms_num_members(attnums);
- attr_sel = (Selectivity *) palloc(sizeof(Selectivity) * nattrs);
+ attr_sel = palloc_array(Selectivity, nattrs);
attidx = 0;
i = -1;
@@ -1397,8 +1396,7 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
if (!has_stats_of_kind(rel->statlist, STATS_EXT_DEPENDENCIES))
return 1.0;
- list_attnums = (AttrNumber *) palloc(sizeof(AttrNumber) *
- list_length(clauses));
+ list_attnums = palloc_array(AttrNumber, list_length(clauses));
/*
* We allocate space as if every clause was a unique expression, although
@@ -1406,7 +1404,7 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
* we'll translate to attnums, and there might be duplicates. But it's
* easier and cheaper to just do one allocation than repalloc later.
*/
- unique_exprs = (Node **) palloc(sizeof(Node *) * list_length(clauses));
+ unique_exprs = palloc_array(Node *, list_length(clauses));
unique_exprs_cnt = 0;
/*
@@ -1559,8 +1557,8 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
* make it just the right size, but it's likely wasteful anyway thanks to
* moving the freed chunks to freelists etc.
*/
- func_dependencies = (MVDependencies **) palloc(sizeof(MVDependencies *) *
- list_length(rel->statlist));
+ func_dependencies = palloc_array(MVDependencies *,
+ list_length(rel->statlist));
nfunc_dependencies = 0;
total_ndeps = 0;
@@ -1783,8 +1781,7 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
* Work out which dependencies we can apply, starting with the
* widest/strongest ones, and proceeding to smaller/weaker ones.
*/
- dependencies = (MVDependency **) palloc(sizeof(MVDependency *) *
- total_ndeps);
+ dependencies = palloc_array(MVDependency *, total_ndeps);
ndependencies = 0;
while (true)
diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c
index a8b63ec0884..c8b3445bade 100644
--- a/src/backend/statistics/extended_stats.c
+++ b/src/backend/statistics/extended_stats.c
@@ -446,7 +446,7 @@ fetch_statentries_for_relation(Relation pg_statext, Oid relid)
Form_pg_statistic_ext staForm;
List *exprs = NIL;
- entry = palloc0(sizeof(StatExtEntry));
+ entry = palloc0_object(StatExtEntry);
staForm = (Form_pg_statistic_ext) GETSTRUCT(htup);
entry->statOid = staForm->oid;
entry->schema = get_namespace_name(staForm->stxnamespace);
@@ -532,7 +532,7 @@ examine_attribute(Node *expr)
/*
* Create the VacAttrStats struct.
*/
- stats = (VacAttrStats *) palloc0(sizeof(VacAttrStats));
+ stats = palloc0_object(VacAttrStats);
stats->attstattarget = -1;
/*
@@ -613,7 +613,7 @@ examine_expression(Node *expr, int stattarget)
/*
* Create the VacAttrStats struct.
*/
- stats = (VacAttrStats *) palloc0(sizeof(VacAttrStats));
+ stats = palloc0_object(VacAttrStats);
/*
* We can't have statistics target specified for the expression, so we
@@ -698,7 +698,7 @@ lookup_var_attr_stats(Bitmapset *attrs, List *exprs,
natts = bms_num_members(attrs) + list_length(exprs);
- stats = (VacAttrStats **) palloc(natts * sizeof(VacAttrStats *));
+ stats = palloc_array(VacAttrStats *, natts);
/* lookup VacAttrStats info for the requested columns (same attnum) */
while ((x = bms_next_member(attrs, x)) >= 0)
@@ -946,7 +946,7 @@ build_attnums_array(Bitmapset *attrs, int nexprs, int *numattrs)
*numattrs = num;
/* build attnums from the bitmapset */
- attnums = (AttrNumber *) palloc(sizeof(AttrNumber) * num);
+ attnums = palloc_array(AttrNumber, num);
i = 0;
j = -1;
while ((j = bms_next_member(attrs, j)) >= 0)
@@ -1027,7 +1027,7 @@ build_sorted_items(StatsBuildData *data, int *nitems,
}
/* build a local cache of typlen for all attributes */
- typlen = (int *) palloc(sizeof(int) * data->nattnums);
+ typlen = palloc_array(int, data->nattnums);
for (i = 0; i < data->nattnums; i++)
typlen[i] = get_typlen(data->stats[i]->attrtypid);
@@ -1726,8 +1726,7 @@ statext_mcv_clauselist_selectivity(PlannerInfo *root, List *clauses, int varReli
if (!has_stats_of_kind(rel->statlist, STATS_EXT_MCV))
return sel;
- list_attnums = (Bitmapset **) palloc(sizeof(Bitmapset *) *
- list_length(clauses));
+ list_attnums = palloc_array(Bitmapset *, list_length(clauses));
/* expressions extracted from complex expressions */
list_exprs = (List **) palloc(sizeof(Node *) * list_length(clauses));
@@ -2152,8 +2151,8 @@ compute_expr_stats(Relation onerel, AnlExprData *exprdata, int nexprs,
econtext->ecxt_scantuple = slot;
/* Compute and save expression values */
- exprvals = (Datum *) palloc(numrows * sizeof(Datum));
- exprnulls = (bool *) palloc(numrows * sizeof(bool));
+ exprvals = palloc_array(Datum, numrows);
+ exprnulls = palloc_array(bool, numrows);
tcnt = 0;
for (i = 0; i < numrows; i++)
@@ -2270,7 +2269,7 @@ build_expr_data(List *exprs, int stattarget)
AnlExprData *exprdata;
ListCell *lc;
- exprdata = (AnlExprData *) palloc0(nexprs * sizeof(AnlExprData));
+ exprdata = palloc0_array(AnlExprData, nexprs);
idx = 0;
foreach(lc, exprs)
@@ -2363,7 +2362,7 @@ serialize_expr_stats(AnlExprData *exprdata, int nexprs)
if (nnum > 0)
{
int n;
- Datum *numdatums = (Datum *) palloc(nnum * sizeof(Datum));
+ Datum *numdatums = palloc_array(Datum, nnum);
ArrayType *arry;
for (n = 0; n < nnum; n++)
diff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c
index d98cda698d9..61126823da9 100644
--- a/src/backend/statistics/mcv.c
+++ b/src/backend/statistics/mcv.c
@@ -270,7 +270,7 @@ statext_mcv_build(StatsBuildData *data, double totalrows, int stattarget)
+ sizeof(SortSupportData));
/* compute frequencies for values in each column */
- nfreqs = (int *) palloc0(sizeof(int) * numattrs);
+ nfreqs = palloc0_array(int, numattrs);
freqs = build_column_frequencies(groups, ngroups, mss, nfreqs);
/*
@@ -428,7 +428,7 @@ build_distinct_groups(int numrows, SortItem *items, MultiSortSupport mss,
j;
int ngroups = count_distinct_groups(numrows, items, mss);
- SortItem *groups = (SortItem *) palloc(ngroups * sizeof(SortItem));
+ SortItem *groups = palloc_array(SortItem, ngroups);
j = 0;
groups[0] = items[0];
@@ -635,8 +635,8 @@ statext_mcv_serialize(MCVList *mcvlist, VacAttrStats **stats)
char *endptr PG_USED_FOR_ASSERTS_ONLY;
/* values per dimension (and number of non-NULL values) */
- Datum **values = (Datum **) palloc0(sizeof(Datum *) * ndims);
- int *counts = (int *) palloc0(sizeof(int) * ndims);
+ Datum **values = palloc0_array(Datum *, ndims);
+ int *counts = palloc0_array(int, ndims);
/*
* We'll include some rudimentary information about the attribute types
@@ -646,7 +646,7 @@ statext_mcv_serialize(MCVList *mcvlist, VacAttrStats **stats)
* the statistics gets dropped automatically. We need to store the info
* about the arrays of deduplicated values anyway.
*/
- info = (DimensionInfo *) palloc0(sizeof(DimensionInfo) * ndims);
+ info = palloc0_array(DimensionInfo, ndims);
/* sort support data for all attributes included in the MCV list */
ssup = (SortSupport) palloc0(sizeof(SortSupportData) * ndims);
@@ -1097,7 +1097,7 @@ statext_mcv_deserialize(bytea *data)
ptr += (sizeof(Oid) * ndims);
/* Now it's safe to access the dimension info. */
- info = palloc(ndims * sizeof(DimensionInfo));
+ info = palloc_array(DimensionInfo, ndims);
memcpy(info, ptr, ndims * sizeof(DimensionInfo));
ptr += (ndims * sizeof(DimensionInfo));
@@ -1134,7 +1134,7 @@ statext_mcv_deserialize(bytea *data)
* original values (it might go away).
*/
datalen = 0; /* space for by-ref data */
- map = (Datum **) palloc(ndims * sizeof(Datum *));
+ map = palloc_array(Datum *, ndims);
for (dim = 0; dim < ndims; dim++)
{
@@ -1609,7 +1609,7 @@ mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
Assert(mcvlist->nitems > 0);
Assert(mcvlist->nitems <= STATS_MCVLIST_MAX_ITEMS);
- matches = palloc(sizeof(bool) * mcvlist->nitems);
+ matches = palloc_array(bool, mcvlist->nitems);
memset(matches, !is_or, sizeof(bool) * mcvlist->nitems);
/*
diff --git a/src/backend/statistics/mvdistinct.c b/src/backend/statistics/mvdistinct.c
index 7e7a63405c8..5a98057a89c 100644
--- a/src/backend/statistics/mvdistinct.c
+++ b/src/backend/statistics/mvdistinct.c
@@ -444,7 +444,7 @@ ndistinct_for_combination(double totalrows, StatsBuildData *data,
* using the specified column combination as dimensions. We could try to
* sort in place, but it'd probably be more complex and bug-prone.
*/
- items = (SortItem *) palloc(numrows * sizeof(SortItem));
+ items = palloc_array(SortItem, numrows);
values = (Datum *) palloc0(sizeof(Datum) * numrows * k);
isnull = (bool *) palloc0(sizeof(bool) * numrows * k);
@@ -593,7 +593,7 @@ generator_init(int n, int k)
Assert((n >= k) && (k > 0));
/* allocate the generator state as a single chunk of memory */
- state = (CombinationGenerator *) palloc(sizeof(CombinationGenerator));
+ state = palloc_object(CombinationGenerator);
state->ncombinations = n_choose_k(n, k);
@@ -691,7 +691,7 @@ generate_combinations_recurse(CombinationGenerator *state,
static void
generate_combinations(CombinationGenerator *state)
{
- int *current = (int *) palloc0(sizeof(int) * state->k);
+ int *current = palloc0_array(int, state->k);
generate_combinations_recurse(state, 0, 0, current);
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 739daa1153a..57d39a59db3 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -4165,7 +4165,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
if (nlocators == 0)
return;
- rels = palloc(sizeof(SMgrRelation) * nlocators); /* non-local relations */
+ rels = palloc_array(SMgrRelation, nlocators); /* non-local relations */
/* If it's a local relation, it's localbuf.c's problem. */
for (i = 0; i < nlocators; i++)
@@ -4247,7 +4247,7 @@ DropRelationsAllBuffers(SMgrRelation *smgr_reln, int nlocators)
}
pfree(block);
- locators = palloc(sizeof(RelFileLocator) * n); /* non-local relations */
+ locators = palloc_array(RelFileLocator, n); /* non-local relations */
for (i = 0; i < n; i++)
locators[i] = rels[i]->smgr_rlocator.locator;
@@ -4597,7 +4597,7 @@ FlushRelationsAllBuffers(SMgrRelation *smgrs, int nrels)
return;
/* fill-in array for qsort */
- srels = palloc(sizeof(SMgrSortArray) * nrels);
+ srels = palloc_array(SMgrSortArray, nrels);
for (i = 0; i < nrels; i++)
{
diff --git a/src/backend/storage/file/buffile.c b/src/backend/storage/file/buffile.c
index 6449f82a72b..93f91caaf78 100644
--- a/src/backend/storage/file/buffile.c
+++ b/src/backend/storage/file/buffile.c
@@ -117,7 +117,7 @@ static File MakeNewFileSetSegment(BufFile *buffile, int segment);
static BufFile *
makeBufFileCommon(int nfiles)
{
- BufFile *file = (BufFile *) palloc(sizeof(BufFile));
+ BufFile *file = palloc_object(BufFile);
file->numFiles = nfiles;
file->isInterXact = false;
@@ -297,7 +297,7 @@ BufFileOpenFileSet(FileSet *fileset, const char *name, int mode,
File *files;
int nfiles = 0;
- files = palloc(sizeof(File) * capacity);
+ files = palloc_array(File, capacity);
/*
* We don't know how many segments there are, so we'll probe the
@@ -309,7 +309,7 @@ BufFileOpenFileSet(FileSet *fileset, const char *name, int mode,
if (nfiles + 1 > capacity)
{
capacity *= 2;
- files = repalloc(files, sizeof(File) * capacity);
+ files = repalloc_array(files, File, capacity);
}
/* Try to load a segment. */
FileSetSegmentName(segment_name, name, nfiles);
diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index 843d1021cf9..7d1d77999b8 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -974,7 +974,7 @@ count_usable_fds(int max_to_probe, int *usable_fds, int *already_open)
#endif
size = 1024;
- fd = (int *) palloc(size * sizeof(int));
+ fd = palloc_array(int, size);
#ifdef HAVE_GETRLIMIT
getrlimit_status = getrlimit(RLIMIT_NOFILE, &rlim);
@@ -1009,7 +1009,7 @@ count_usable_fds(int max_to_probe, int *usable_fds, int *already_open)
if (used >= size)
{
size *= 2;
- fd = (int *) repalloc(fd, size * sizeof(int));
+ fd = repalloc_array(fd, int, size);
}
fd[used++] = thisfd;
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 2e54c11f880..9e71f75f4de 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -1162,7 +1162,7 @@ ProcArrayApplyRecoveryInfo(RunningTransactions running)
* Allocate a temporary array to avoid modifying the array passed as
* argument.
*/
- xids = palloc(sizeof(TransactionId) * (running->xcnt + running->subxcnt));
+ xids = palloc_array(TransactionId, (running->xcnt + running->subxcnt));
/*
* Add to the temp array any xids which have not already completed.
@@ -3050,8 +3050,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids, int type)
Assert(type != 0);
/* allocate what's certainly enough result space */
- vxids = (VirtualTransactionId *)
- palloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);
+ vxids = palloc_array(VirtualTransactionId, arrayP->maxProcs);
LWLockAcquire(ProcArrayLock, LW_SHARED);
@@ -3331,8 +3330,7 @@ GetCurrentVirtualXIDs(TransactionId limitXmin, bool excludeXmin0,
int index;
/* allocate what's certainly enough result space */
- vxids = (VirtualTransactionId *)
- palloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);
+ vxids = palloc_array(VirtualTransactionId, arrayP->maxProcs);
LWLockAcquire(ProcArrayLock, LW_SHARED);
diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c
index 2c79a649f46..9c888d3eb78 100644
--- a/src/backend/storage/ipc/shm_mq.c
+++ b/src/backend/storage/ipc/shm_mq.c
@@ -289,7 +289,7 @@ shm_mq_get_sender(shm_mq *mq)
shm_mq_handle *
shm_mq_attach(shm_mq *mq, dsm_segment *seg, BackgroundWorkerHandle *handle)
{
- shm_mq_handle *mqh = palloc(sizeof(shm_mq_handle));
+ shm_mq_handle *mqh = palloc_object(shm_mq_handle);
Assert(mq->mq_receiver == MyProc || mq->mq_sender == MyProc);
mqh->mqh_queue = mq;
diff --git a/src/backend/storage/lmgr/deadlock.c b/src/backend/storage/lmgr/deadlock.c
index dc6923f8132..15165624f19 100644
--- a/src/backend/storage/lmgr/deadlock.c
+++ b/src/backend/storage/lmgr/deadlock.c
@@ -151,16 +151,16 @@ InitDeadLockChecking(void)
* FindLockCycle needs at most MaxBackends entries in visitedProcs[] and
* deadlockDetails[].
*/
- visitedProcs = (PGPROC **) palloc(MaxBackends * sizeof(PGPROC *));
- deadlockDetails = (DEADLOCK_INFO *) palloc(MaxBackends * sizeof(DEADLOCK_INFO));
+ visitedProcs = palloc_array(PGPROC *, MaxBackends);
+ deadlockDetails = palloc_array(DEADLOCK_INFO, MaxBackends);
/*
* TopoSort needs to consider at most MaxBackends wait-queue entries, and
* it needn't run concurrently with FindLockCycle.
*/
topoProcs = visitedProcs; /* re-use this space */
- beforeConstraints = (int *) palloc(MaxBackends * sizeof(int));
- afterConstraints = (int *) palloc(MaxBackends * sizeof(int));
+ beforeConstraints = palloc_array(int, MaxBackends);
+ afterConstraints = palloc_array(int, MaxBackends);
/*
* We need to consider rearranging at most MaxBackends/2 wait queues
@@ -168,9 +168,8 @@ InitDeadLockChecking(void)
* and the expanded form of the wait queues can't involve more than
* MaxBackends total waiters.
*/
- waitOrders = (WAIT_ORDER *)
- palloc((MaxBackends / 2) * sizeof(WAIT_ORDER));
- waitOrderProcs = (PGPROC **) palloc(MaxBackends * sizeof(PGPROC *));
+ waitOrders = palloc_array(WAIT_ORDER, (MaxBackends / 2));
+ waitOrderProcs = palloc_array(PGPROC *, MaxBackends);
/*
* Allow at most MaxBackends distinct constraints in a configuration. (Is
@@ -181,7 +180,7 @@ InitDeadLockChecking(void)
* really big might potentially allow a stack-overflow problem.
*/
maxCurConstraints = MaxBackends;
- curConstraints = (EDGE *) palloc(maxCurConstraints * sizeof(EDGE));
+ curConstraints = palloc_array(EDGE, maxCurConstraints);
/*
* Allow up to 3*MaxBackends constraints to be saved without having to
@@ -192,8 +191,7 @@ InitDeadLockChecking(void)
* output workspace for FindLockCycle.
*/
maxPossibleConstraints = MaxBackends * 4;
- possibleConstraints =
- (EDGE *) palloc(maxPossibleConstraints * sizeof(EDGE));
+ possibleConstraints = palloc_array(EDGE, maxPossibleConstraints);
MemoryContextSwitchTo(oldcxt);
}
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 3e2f98b371c..a7a76ae864d 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -3007,9 +3007,8 @@ GetLockConflicts(const LOCKTAG *locktag, LOCKMODE lockmode, int *countp)
(MaxBackends + max_prepared_xacts + 1));
}
else
- vxids = (VirtualTransactionId *)
- palloc0(sizeof(VirtualTransactionId) *
- (MaxBackends + max_prepared_xacts + 1));
+ vxids = palloc0_array(VirtualTransactionId,
+ (MaxBackends + max_prepared_xacts + 1));
/* Compute hash code and partition lock, and look up conflicting modes. */
hashcode = LockTagHashCode(locktag);
@@ -3706,7 +3705,7 @@ GetLockStatusData(void)
int el;
int i;
- data = (LockData *) palloc(sizeof(LockData));
+ data = palloc_object(LockData);
/* Guess how much space we'll need. */
els = MaxBackends;
@@ -3906,7 +3905,7 @@ GetBlockerStatusData(int blocked_pid)
PGPROC *proc;
int i;
- data = (BlockedProcsData *) palloc(sizeof(BlockedProcsData));
+ data = palloc_object(BlockedProcsData);
/*
* Guess how much space we'll need, and preallocate. Most of the time
@@ -4099,7 +4098,7 @@ GetRunningTransactionLocks(int *nlocks)
* Allocating enough space for all locks in the lock table is overkill,
* but it's more convenient and faster than having to enlarge the array.
*/
- accessExclusiveLocks = palloc(els * sizeof(xl_standby_lock));
+ accessExclusiveLocks = palloc_array(xl_standby_lock, els);
/* Now scan the tables to copy the data */
hash_seq_init(&seqstat, LockMethodProcLockHash);
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index 2f558ffea14..f5092ce5f42 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -687,9 +687,9 @@ RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks)
{
int i = pg_nextpower2_32(NamedLWLockTrancheRequests + 1);
- NamedLWLockTrancheRequestArray = (NamedLWLockTrancheRequest *)
- repalloc(NamedLWLockTrancheRequestArray,
- i * sizeof(NamedLWLockTrancheRequest));
+ NamedLWLockTrancheRequestArray = repalloc_array(NamedLWLockTrancheRequestArray,
+ NamedLWLockTrancheRequest,
+ i);
NamedLWLockTrancheRequestsAllocated = i;
}
diff --git a/src/backend/storage/lmgr/predicate.c b/src/backend/storage/lmgr/predicate.c
index 5b21a053981..acb6914054f 100644
--- a/src/backend/storage/lmgr/predicate.c
+++ b/src/backend/storage/lmgr/predicate.c
@@ -1441,7 +1441,7 @@ GetPredicateLockStatusData(void)
HASH_SEQ_STATUS seqstat;
PREDICATELOCK *predlock;
- data = (PredicateLockData *) palloc(sizeof(PredicateLockData));
+ data = palloc_object(PredicateLockData);
/*
* To ensure consistency, take simultaneous locks on all partition locks
diff --git a/src/backend/storage/smgr/bulk_write.c b/src/backend/storage/smgr/bulk_write.c
index ecd441f1be2..74d39a123f6 100644
--- a/src/backend/storage/smgr/bulk_write.c
+++ b/src/backend/storage/smgr/bulk_write.c
@@ -101,7 +101,7 @@ smgr_bulk_start_smgr(SMgrRelation smgr, ForkNumber forknum, bool use_wal)
{
BulkWriteState *state;
- state = palloc(sizeof(BulkWriteState));
+ state = palloc_object(BulkWriteState);
state->smgr = smgr;
state->forknum = forknum;
state->use_wal = use_wal;
diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
index 7bf0b45e2c3..0aa4f69db17 100644
--- a/src/backend/storage/smgr/md.c
+++ b/src/backend/storage/smgr/md.c
@@ -1463,7 +1463,7 @@ DropRelationFiles(RelFileLocator *delrels, int ndelrels, bool isRedo)
SMgrRelation *srels;
int i;
- srels = palloc(sizeof(SMgrRelation) * ndelrels);
+ srels = palloc_array(SMgrRelation, ndelrels);
for (i = 0; i < ndelrels; i++)
{
SMgrRelation srel = smgropen(delrels[i], INVALID_PROC_NUMBER);
diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c
index ebe35c04de5..f15ea71c977 100644
--- a/src/backend/storage/smgr/smgr.c
+++ b/src/backend/storage/smgr/smgr.c
@@ -481,7 +481,7 @@ smgrdounlinkall(SMgrRelation *rels, int nrels, bool isRedo)
* create an array which contains all relations to be dropped, and close
* each relation's forks at the smgr level while at it
*/
- rlocators = palloc(sizeof(RelFileLocatorBackend) * nrels);
+ rlocators = palloc_array(RelFileLocatorBackend, nrels);
for (i = 0; i < nrels; i++)
{
RelFileLocatorBackend rlocator = rels[i]->smgr_rlocator;
diff --git a/src/backend/storage/sync/sync.c b/src/backend/storage/sync/sync.c
index fc16db90133..9d4c5eae5f6 100644
--- a/src/backend/storage/sync/sync.c
+++ b/src/backend/storage/sync/sync.c
@@ -531,7 +531,7 @@ RememberSyncRequest(const FileTag *ftag, SyncRequestType type)
MemoryContext oldcxt = MemoryContextSwitchTo(pendingOpsCxt);
PendingUnlinkEntry *entry;
- entry = palloc(sizeof(PendingUnlinkEntry));
+ entry = palloc_object(PendingUnlinkEntry);
entry->tag = *ftag;
entry->cycle_ctr = checkpoint_cycle_ctr;
entry->canceled = false;
diff --git a/src/backend/tcop/fastpath.c b/src/backend/tcop/fastpath.c
index 62f9ffa0dc0..7ccc4c9c828 100644
--- a/src/backend/tcop/fastpath.c
+++ b/src/backend/tcop/fastpath.c
@@ -339,7 +339,7 @@ parse_fcall_arguments(StringInfo msgBuf, struct fp_info *fip,
numAFormats = pq_getmsgint(msgBuf, 2);
if (numAFormats > 0)
{
- aformats = (int16 *) palloc(numAFormats * sizeof(int16));
+ aformats = palloc_array(int16, numAFormats);
for (i = 0; i < numAFormats; i++)
aformats[i] = pq_getmsgint(msgBuf, 2);
}
diff --git a/src/backend/tcop/pquery.c b/src/backend/tcop/pquery.c
index 6f22496305a..e1a2a4bef22 100644
--- a/src/backend/tcop/pquery.c
+++ b/src/backend/tcop/pquery.c
@@ -73,7 +73,7 @@ CreateQueryDesc(PlannedStmt *plannedstmt,
QueryEnvironment *queryEnv,
int instrument_options)
{
- QueryDesc *qd = (QueryDesc *) palloc(sizeof(QueryDesc));
+ QueryDesc *qd = palloc_object(QueryDesc);
qd->operation = plannedstmt->commandType; /* operation */
qd->plannedstmt = plannedstmt; /* plan */
diff --git a/src/backend/tsearch/dict.c b/src/backend/tsearch/dict.c
index eb968858683..b6ed29182e2 100644
--- a/src/backend/tsearch/dict.c
+++ b/src/backend/tsearch/dict.c
@@ -61,7 +61,7 @@ ts_lexize(PG_FUNCTION_ARGS)
ptr = res;
while (ptr->lexeme)
ptr++;
- da = (Datum *) palloc(sizeof(Datum) * (ptr - res));
+ da = palloc_array(Datum, (ptr - res));
ptr = res;
while (ptr->lexeme)
{
diff --git a/src/backend/tsearch/dict_ispell.c b/src/backend/tsearch/dict_ispell.c
index 63bd193a78a..6594a9eef0f 100644
--- a/src/backend/tsearch/dict_ispell.c
+++ b/src/backend/tsearch/dict_ispell.c
@@ -37,7 +37,7 @@ dispell_init(PG_FUNCTION_ARGS)
stoploaded = false;
ListCell *l;
- d = (DictISpell *) palloc0(sizeof(DictISpell));
+ d = palloc0_object(DictISpell);
NIStartBuild(&(d->obj));
diff --git a/src/backend/tsearch/dict_simple.c b/src/backend/tsearch/dict_simple.c
index 2c972fc0538..f6639ac7c97 100644
--- a/src/backend/tsearch/dict_simple.c
+++ b/src/backend/tsearch/dict_simple.c
@@ -31,7 +31,7 @@ Datum
dsimple_init(PG_FUNCTION_ARGS)
{
List *dictoptions = (List *) PG_GETARG_POINTER(0);
- DictSimple *d = (DictSimple *) palloc0(sizeof(DictSimple));
+ DictSimple *d = palloc0_object(DictSimple);
bool stoploaded = false,
acceptloaded = false;
ListCell *l;
@@ -87,13 +87,13 @@ dsimple_lexize(PG_FUNCTION_ARGS)
{
/* reject as stopword */
pfree(txt);
- res = palloc0(sizeof(TSLexeme) * 2);
+ res = palloc0_array(TSLexeme, 2);
PG_RETURN_POINTER(res);
}
else if (d->accept)
{
/* accept */
- res = palloc0(sizeof(TSLexeme) * 2);
+ res = palloc0_array(TSLexeme, 2);
res[0].lexeme = txt;
PG_RETURN_POINTER(res);
}
diff --git a/src/backend/tsearch/dict_synonym.c b/src/backend/tsearch/dict_synonym.c
index 0da5a9d6868..bf93e6c0d6a 100644
--- a/src/backend/tsearch/dict_synonym.c
+++ b/src/backend/tsearch/dict_synonym.c
@@ -134,7 +134,7 @@ dsynonym_init(PG_FUNCTION_ARGS)
errmsg("could not open synonym file \"%s\": %m",
filename)));
- d = (DictSyn *) palloc0(sizeof(DictSyn));
+ d = palloc0_object(DictSyn);
while ((line = tsearch_readline(&trst)) != NULL)
{
@@ -235,7 +235,7 @@ dsynonym_lexize(PG_FUNCTION_ARGS)
if (!found)
PG_RETURN_POINTER(NULL);
- res = palloc0(sizeof(TSLexeme) * 2);
+ res = palloc0_array(TSLexeme, 2);
res[0].lexeme = pnstrdup(found->out, found->outlen);
res[0].flags = found->flags;
diff --git a/src/backend/tsearch/dict_thesaurus.c b/src/backend/tsearch/dict_thesaurus.c
index 1bebe36a691..a3c6ab9b992 100644
--- a/src/backend/tsearch/dict_thesaurus.c
+++ b/src/backend/tsearch/dict_thesaurus.c
@@ -305,7 +305,7 @@ addCompiledLexeme(TheLexeme *newwrds, int *nnw, int *tnm, TSLexeme *lexeme, Lexe
if (*nnw >= *tnm)
{
*tnm *= 2;
- newwrds = (TheLexeme *) repalloc(newwrds, sizeof(TheLexeme) * *tnm);
+ newwrds = repalloc_array(newwrds, TheLexeme, *tnm);
}
newwrds[*nnw].entries = (LexemeInfo *) palloc(sizeof(LexemeInfo));
@@ -393,7 +393,7 @@ compileTheLexeme(DictThesaurus *d)
int i,
nnw = 0,
tnm = 16;
- TheLexeme *newwrds = (TheLexeme *) palloc(sizeof(TheLexeme) * tnm),
+ TheLexeme *newwrds = palloc_array(TheLexeme, tnm),
*ptrwrds;
for (i = 0; i < d->nwrds; i++)
@@ -602,7 +602,7 @@ thesaurus_init(PG_FUNCTION_ARGS)
List *namelist;
ListCell *l;
- d = (DictThesaurus *) palloc0(sizeof(DictThesaurus));
+ d = palloc0_object(DictThesaurus);
foreach(l, dictoptions)
{
@@ -755,7 +755,7 @@ copyTSLexeme(TheSubstitute *ts)
TSLexeme *res;
uint16 i;
- res = (TSLexeme *) palloc(sizeof(TSLexeme) * (ts->reslen + 1));
+ res = palloc_array(TSLexeme, (ts->reslen + 1));
for (i = 0; i < ts->reslen; i++)
{
res[i] = ts->res[i];
@@ -833,7 +833,7 @@ thesaurus_lexize(PG_FUNCTION_ARGS)
ptr++;
}
- infos = (LexemeInfo **) palloc(sizeof(LexemeInfo *) * nlex);
+ infos = palloc_array(LexemeInfo *, nlex);
for (i = 0; i < nlex; i++)
if ((infos[i] = findTheLexeme(d, basevar[i].lexeme)) == NULL)
break;
diff --git a/src/backend/tsearch/spell.c b/src/backend/tsearch/spell.c
index 018b66d2c69..0019f4aec78 100644
--- a/src/backend/tsearch/spell.c
+++ b/src/backend/tsearch/spell.c
@@ -1987,7 +1987,7 @@ NISortAffixes(IspellDict *Conf)
/* Store compound affixes in the Conf->CompoundAffix array */
if (Conf->naffixes > 1)
qsort(Conf->Affix, Conf->naffixes, sizeof(AFFIX), cmpaffix);
- Conf->CompoundAffix = ptr = (CMPDAffix *) palloc(sizeof(CMPDAffix) * Conf->naffixes);
+ Conf->CompoundAffix = ptr = palloc_array(CMPDAffix, Conf->naffixes);
ptr->affix = NULL;
for (i = 0; i < Conf->naffixes; i++)
@@ -2143,7 +2143,7 @@ CheckAffix(const char *word, size_t len, AFFIX *Affix, int flagflags, char *neww
/* Convert data string to wide characters */
newword_len = strlen(newword);
- data = (pg_wchar *) palloc((newword_len + 1) * sizeof(pg_wchar));
+ data = palloc_array(pg_wchar, (newword_len + 1));
data_len = pg_mb2wchar_with_len(newword, data, newword_len);
if (pg_regexec(Affix->reg.pregex, data, data_len,
@@ -2193,7 +2193,7 @@ NormalizeSubWord(IspellDict *Conf, const char *word, int flag)
if (wrdlen > MAXNORMLEN)
return NULL;
- cur = forms = (char **) palloc(MAX_NORM * sizeof(char *));
+ cur = forms = palloc_array(char *, MAX_NORM);
*cur = NULL;
@@ -2336,7 +2336,7 @@ CheckCompoundAffixes(CMPDAffix **ptr, const char *word, int len, bool CheckInPla
static SplitVar *
CopyVar(SplitVar *s, int makedup)
{
- SplitVar *v = (SplitVar *) palloc(sizeof(SplitVar));
+ SplitVar *v = palloc_object(SplitVar);
v->next = NULL;
if (s)
diff --git a/src/backend/tsearch/ts_parse.c b/src/backend/tsearch/ts_parse.c
index e5da6cf17ec..166c6902541 100644
--- a/src/backend/tsearch/ts_parse.c
+++ b/src/backend/tsearch/ts_parse.c
@@ -99,7 +99,7 @@ LPLRemoveHead(ListParsedLex *list)
static void
LexizeAddLemm(LexizeData *ld, int type, char *lemm, int lenlemm)
{
- ParsedLex *newpl = (ParsedLex *) palloc(sizeof(ParsedLex));
+ ParsedLex *newpl = palloc_object(ParsedLex);
newpl->type = type;
newpl->lemm = lemm;
diff --git a/src/backend/tsearch/ts_selfuncs.c b/src/backend/tsearch/ts_selfuncs.c
index 0c1d2bc1109..85ffda7a355 100644
--- a/src/backend/tsearch/ts_selfuncs.c
+++ b/src/backend/tsearch/ts_selfuncs.c
@@ -226,7 +226,7 @@ mcelem_tsquery_selec(TSQuery query, Datum *mcelem, int nmcelem,
/*
* Transpose the data into a single array so we can use bsearch().
*/
- lookup = (TextFreq *) palloc(sizeof(TextFreq) * nmcelem);
+ lookup = palloc_array(TextFreq, nmcelem);
for (i = 0; i < nmcelem; i++)
{
/*
diff --git a/src/backend/tsearch/ts_typanalyze.c b/src/backend/tsearch/ts_typanalyze.c
index 1494da1c9d3..4912bdf773d 100644
--- a/src/backend/tsearch/ts_typanalyze.c
+++ b/src/backend/tsearch/ts_typanalyze.c
@@ -320,7 +320,7 @@ compute_tsvector_stats(VacAttrStats *stats,
cutoff_freq = 9 * lexeme_no / bucket_width;
i = hash_get_num_entries(lexemes_tab); /* surely enough space */
- sort_table = (TrackItem **) palloc(sizeof(TrackItem *) * i);
+ sort_table = palloc_array(TrackItem *, i);
hash_seq_init(&scan_status, lexemes_tab);
track_len = 0;
@@ -395,8 +395,8 @@ compute_tsvector_stats(VacAttrStats *stats,
* create that for a tsvector column, since null elements aren't
* possible.)
*/
- mcelem_values = (Datum *) palloc(num_mcelem * sizeof(Datum));
- mcelem_freqs = (float4 *) palloc((num_mcelem + 2) * sizeof(float4));
+ mcelem_values = palloc_array(Datum, num_mcelem);
+ mcelem_freqs = palloc_array(float4, (num_mcelem + 2));
/*
* See comments above about use of nonnull_cnt as the divisor for
diff --git a/src/backend/tsearch/ts_utils.c b/src/backend/tsearch/ts_utils.c
index 0b4a5786644..384059493bf 100644
--- a/src/backend/tsearch/ts_utils.c
+++ b/src/backend/tsearch/ts_utils.c
@@ -105,12 +105,13 @@ readstoplist(const char *fname, StopList *s, char *(*wordop) (const char *, size
if (reallen == 0)
{
reallen = 64;
- stop = (char **) palloc(sizeof(char *) * reallen);
+ stop = palloc_array(char *, reallen);
}
else
{
reallen *= 2;
- stop = (char **) repalloc(stop, sizeof(char *) * reallen);
+ stop = repalloc_array(stop, char *,
+ reallen);
}
}
diff --git a/src/backend/tsearch/wparser.c b/src/backend/tsearch/wparser.c
index a8ddb610991..0aa64d5f084 100644
--- a/src/backend/tsearch/wparser.c
+++ b/src/backend/tsearch/wparser.c
@@ -58,7 +58,7 @@ tt_setup_firstcall(FuncCallContext *funcctx, FunctionCallInfo fcinfo,
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- st = (TSTokenTypeStorage *) palloc(sizeof(TSTokenTypeStorage));
+ st = palloc_object(TSTokenTypeStorage);
st->cur = 0;
/* lextype takes one dummy argument */
st->list = (LexDescr *) DatumGetPointer(OidFunctionCall1(prs->lextypeOid,
@@ -173,7 +173,7 @@ prs_setup_firstcall(FuncCallContext *funcctx, FunctionCallInfo fcinfo,
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- st = (PrsStorage *) palloc(sizeof(PrsStorage));
+ st = palloc_object(PrsStorage);
st->cur = 0;
st->len = 16;
st->list = (LexemeEntry *) palloc(sizeof(LexemeEntry) * st->len);
@@ -373,7 +373,7 @@ ts_headline_jsonb_byid_opt(PG_FUNCTION_ARGS)
Jsonb *out;
JsonTransformStringValuesAction action = (JsonTransformStringValuesAction) headline_json_value;
HeadlineParsedText prs;
- HeadlineJsonState *state = palloc0(sizeof(HeadlineJsonState));
+ HeadlineJsonState *state = palloc0_object(HeadlineJsonState);
memset(&prs, 0, sizeof(HeadlineParsedText));
prs.lenwords = 32;
@@ -450,7 +450,7 @@ ts_headline_json_byid_opt(PG_FUNCTION_ARGS)
JsonTransformStringValuesAction action = (JsonTransformStringValuesAction) headline_json_value;
HeadlineParsedText prs;
- HeadlineJsonState *state = palloc0(sizeof(HeadlineJsonState));
+ HeadlineJsonState *state = palloc0_object(HeadlineJsonState);
memset(&prs, 0, sizeof(HeadlineParsedText));
prs.lenwords = 32;
diff --git a/src/backend/tsearch/wparser_def.c b/src/backend/tsearch/wparser_def.c
index f26923d044b..5d4debb8a6f 100644
--- a/src/backend/tsearch/wparser_def.c
+++ b/src/backend/tsearch/wparser_def.c
@@ -271,7 +271,7 @@ static bool TParserGet(TParser *prs);
static TParserPosition *
newTParserPosition(TParserPosition *prev)
{
- TParserPosition *res = (TParserPosition *) palloc(sizeof(TParserPosition));
+ TParserPosition *res = palloc_object(TParserPosition);
if (prev)
memcpy(res, prev, sizeof(TParserPosition));
@@ -288,7 +288,7 @@ newTParserPosition(TParserPosition *prev)
static TParser *
TParserInit(char *str, int len)
{
- TParser *prs = (TParser *) palloc0(sizeof(TParser));
+ TParser *prs = palloc0_object(TParser);
prs->charmaxlen = pg_database_encoding_max_length();
prs->str = str;
@@ -345,7 +345,7 @@ TParserInit(char *str, int len)
static TParser *
TParserCopyInit(const TParser *orig)
{
- TParser *prs = (TParser *) palloc0(sizeof(TParser));
+ TParser *prs = palloc0_object(TParser);
prs->charmaxlen = orig->charmaxlen;
prs->str = orig->str + orig->state->posbyte;
@@ -1877,7 +1877,7 @@ TParserGet(TParser *prs)
Datum
prsd_lextype(PG_FUNCTION_ARGS)
{
- LexDescr *descr = (LexDescr *) palloc(sizeof(LexDescr) * (LASTNUM + 1));
+ LexDescr *descr = palloc_array(LexDescr, (LASTNUM + 1));
int i;
for (i = 1; i <= LASTNUM; i++)
@@ -2296,7 +2296,7 @@ mark_hl_fragments(HeadlineParsedText *prs, TSQuery query, List *locations,
maxitems;
CoverPos *covers;
- covers = palloc(maxcovers * sizeof(CoverPos));
+ covers = palloc_array(CoverPos, maxcovers);
/* get all covers */
while (hlCover(prs, query, locations, &nextpos, &p, &q))
@@ -2317,7 +2317,8 @@ mark_hl_fragments(HeadlineParsedText *prs, TSQuery query, List *locations,
if (numcovers >= maxcovers)
{
maxcovers *= 2;
- covers = repalloc(covers, sizeof(CoverPos) * maxcovers);
+ covers = repalloc_array(covers, CoverPos,
+ maxcovers);
}
covers[numcovers].startpos = startpos;
covers[numcovers].endpos = endpos;
diff --git a/src/backend/utils/activity/pgstat_relation.c b/src/backend/utils/activity/pgstat_relation.c
index 09247ba0971..d8261a7b03e 100644
--- a/src/backend/utils/activity/pgstat_relation.c
+++ b/src/backend/utils/activity/pgstat_relation.c
@@ -501,7 +501,7 @@ find_tabstat_entry(Oid rel_id)
}
tabentry = (PgStat_TableStatus *) entry_ref->pending;
- tablestatus = palloc(sizeof(PgStat_TableStatus));
+ tablestatus = palloc_object(PgStat_TableStatus);
*tablestatus = *tabentry;
/*
diff --git a/src/backend/utils/activity/wait_event.c b/src/backend/utils/activity/wait_event.c
index d9b8f34a355..e81b4f96a24 100644
--- a/src/backend/utils/activity/wait_event.c
+++ b/src/backend/utils/activity/wait_event.c
@@ -317,7 +317,7 @@ GetWaitEventCustomNames(uint32 classId, int *nwaitevents)
els = hash_get_num_entries(WaitEventCustomHashByName);
/* Allocate enough space for all entries */
- waiteventnames = palloc(els * sizeof(char *));
+ waiteventnames = palloc_array(char *, els);
/* Now scan the hash table to copy the data */
hash_seq_init(&hash_seq, WaitEventCustomHashByName);
diff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c
index 6a76550a5e2..4fc5eee7863 100644
--- a/src/backend/utils/adt/acl.c
+++ b/src/backend/utils/adt/acl.c
@@ -602,7 +602,7 @@ aclitemin(PG_FUNCTION_ARGS)
Node *escontext = fcinfo->context;
AclItem *aip;
- aip = (AclItem *) palloc(sizeof(AclItem));
+ aip = palloc_object(AclItem);
s = aclparse(s, aip, escontext);
if (s == NULL)
@@ -1537,7 +1537,7 @@ aclmembers(const Acl *acl, Oid **roleids)
check_acl(acl);
/* Allocate the worst-case space requirement */
- list = palloc(ACL_NUM(acl) * 2 * sizeof(Oid));
+ list = palloc_array(Oid, ACL_NUM(acl) * 2);
acldat = ACL_DAT(acl);
/*
@@ -1645,7 +1645,7 @@ makeaclitem(PG_FUNCTION_ARGS)
priv = convert_any_priv_string(privtext, any_priv_map);
- result = (AclItem *) palloc(sizeof(AclItem));
+ result = palloc_object(AclItem);
result->ai_grantee = grantee;
result->ai_grantor = grantor;
@@ -1805,7 +1805,7 @@ aclexplode(PG_FUNCTION_ARGS)
funcctx->tuple_desc = BlessTupleDesc(tupdesc);
/* allocate memory for user context */
- idx = (int *) palloc(sizeof(int[2]));
+ idx = palloc_array(int, 2);
idx[0] = 0; /* ACL array item index */
idx[1] = -1; /* privilege type counter */
funcctx->user_fctx = idx;
diff --git a/src/backend/utils/adt/array_selfuncs.c b/src/backend/utils/adt/array_selfuncs.c
index a69a84c2aee..08f3eea2173 100644
--- a/src/backend/utils/adt/array_selfuncs.c
+++ b/src/backend/utils/adt/array_selfuncs.c
@@ -753,7 +753,7 @@ mcelem_array_contained_selec(Datum *mcelem, int nmcelem,
* elem_selec is array of estimated frequencies for elements in the
* constant.
*/
- elem_selec = (float *) palloc(sizeof(float) * nitems);
+ elem_selec = palloc_array(float, nitems);
/* Scan mcelem and array in parallel. */
mcelem_index = 0;
@@ -927,7 +927,7 @@ calc_hist(const float4 *hist, int nhist, int n)
next_interval;
float frac;
- hist_part = (float *) palloc((n + 1) * sizeof(float));
+ hist_part = palloc_array(float, (n + 1));
/*
* frac is a probability contribution for each interval between histogram
@@ -1019,8 +1019,8 @@ calc_distr(const float *p, int n, int m, float rest)
* Since we return only the last row of the matrix and need only the
* current and previous row for calculations, allocate two rows.
*/
- row = (float *) palloc((m + 1) * sizeof(float));
- prev_row = (float *) palloc((m + 1) * sizeof(float));
+ row = palloc_array(float, (m + 1));
+ prev_row = palloc_array(float, (m + 1));
/* M[0,0] = 1 */
row[0] = 1.0f;
diff --git a/src/backend/utils/adt/array_typanalyze.c b/src/backend/utils/adt/array_typanalyze.c
index 44a6eb5dad0..f5bbf33f379 100644
--- a/src/backend/utils/adt/array_typanalyze.c
+++ b/src/backend/utils/adt/array_typanalyze.c
@@ -132,7 +132,7 @@ array_typanalyze(PG_FUNCTION_ARGS)
PG_RETURN_BOOL(true);
/* Store our findings for use by compute_array_stats() */
- extra_data = (ArrayAnalyzeExtraData *) palloc(sizeof(ArrayAnalyzeExtraData));
+ extra_data = palloc_object(ArrayAnalyzeExtraData);
extra_data->type_id = typentry->type_id;
extra_data->eq_opr = typentry->eq_opr;
extra_data->coll_id = stats->attrcollid; /* collation we should use */
@@ -469,7 +469,7 @@ compute_array_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
cutoff_freq = 9 * element_no / bucket_width;
i = hash_get_num_entries(elements_tab); /* surely enough space */
- sort_table = (TrackItem **) palloc(sizeof(TrackItem *) * i);
+ sort_table = palloc_array(TrackItem *, i);
hash_seq_init(&scan_status, elements_tab);
track_len = 0;
@@ -532,8 +532,8 @@ compute_array_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
* through all the values. We also want the frequency of null
* elements. Store these three values at the end of mcelem_freqs.
*/
- mcelem_values = (Datum *) palloc(num_mcelem * sizeof(Datum));
- mcelem_freqs = (float4 *) palloc((num_mcelem + 3) * sizeof(float4));
+ mcelem_values = palloc_array(Datum, num_mcelem);
+ mcelem_freqs = palloc_array(float4, (num_mcelem + 3));
/*
* See comments above about use of nonnull_cnt as the divisor for
@@ -589,8 +589,8 @@ compute_array_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
* Create an array of DECountItem pointers, and sort them into
* increasing count order.
*/
- sorted_count_items = (DECountItem **)
- palloc(sizeof(DECountItem *) * count_items_count);
+ sorted_count_items = palloc_array(DECountItem *,
+ count_items_count);
hash_seq_init(&scan_status, count_tab);
j = 0;
while ((count_item = (DECountItem *) hash_seq_search(&scan_status)) != NULL)
diff --git a/src/backend/utils/adt/array_userfuncs.c b/src/backend/utils/adt/array_userfuncs.c
index 0b02fe37445..e851928c07e 100644
--- a/src/backend/utils/adt/array_userfuncs.c
+++ b/src/backend/utils/adt/array_userfuncs.c
@@ -357,8 +357,8 @@ array_cat(PG_FUNCTION_ARGS)
* themselves) of the input argument arrays
*/
ndims = ndims1;
- dims = (int *) palloc(ndims * sizeof(int));
- lbs = (int *) palloc(ndims * sizeof(int));
+ dims = palloc_array(int, ndims);
+ lbs = palloc_array(int, ndims);
dims[0] = dims1[0] + dims2[0];
lbs[0] = lbs1[0];
@@ -383,8 +383,8 @@ array_cat(PG_FUNCTION_ARGS)
* the first argument inserted at the front of the outer dimension
*/
ndims = ndims2;
- dims = (int *) palloc(ndims * sizeof(int));
- lbs = (int *) palloc(ndims * sizeof(int));
+ dims = palloc_array(int, ndims);
+ lbs = palloc_array(int, ndims);
memcpy(dims, dims2, ndims * sizeof(int));
memcpy(lbs, lbs2, ndims * sizeof(int));
@@ -411,8 +411,8 @@ array_cat(PG_FUNCTION_ARGS)
* second argument appended to the end of the outer dimension
*/
ndims = ndims1;
- dims = (int *) palloc(ndims * sizeof(int));
- lbs = (int *) palloc(ndims * sizeof(int));
+ dims = palloc_array(int, ndims);
+ lbs = palloc_array(int, ndims);
memcpy(dims, dims1, ndims * sizeof(int));
memcpy(lbs, lbs1, ndims * sizeof(int));
diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.c
index d777f38ed99..1ebd66a83be 100644
--- a/src/backend/utils/adt/arrayfuncs.c
+++ b/src/backend/utils/adt/arrayfuncs.c
@@ -1107,8 +1107,8 @@ array_out(PG_FUNCTION_ARGS)
* any overhead such as escaping backslashes), and detect whether each
* item needs double quotes.
*/
- values = (char **) palloc(nitems * sizeof(char *));
- needquotes = (bool *) palloc(nitems * sizeof(bool));
+ values = palloc_array(char *, nitems);
+ needquotes = palloc_array(bool, nitems);
overall_length = 0;
array_iter_setup(&iter, v);
@@ -1393,8 +1393,8 @@ array_recv(PG_FUNCTION_ARGS)
typalign = my_extra->typalign;
typioparam = my_extra->typioparam;
- dataPtr = (Datum *) palloc(nitems * sizeof(Datum));
- nullsPtr = (bool *) palloc(nitems * sizeof(bool));
+ dataPtr = palloc_array(Datum, nitems);
+ nullsPtr = palloc_array(bool, nitems);
ReadArrayBinary(buf, nitems,
&my_extra->proc, typioparam, typmod,
typlen, typbyval, typalign,
@@ -2671,11 +2671,11 @@ array_set_element_expanded(Datum arraydatum,
int newlen = dim[0] + dim[0] / 8;
newlen = Max(newlen, dim[0]); /* integer overflow guard */
- eah->dvalues = dvalues = (Datum *)
- repalloc(dvalues, newlen * sizeof(Datum));
+ eah->dvalues = dvalues = repalloc_array(dvalues, Datum,
+ newlen);
if (dnulls)
- eah->dnulls = dnulls = (bool *)
- repalloc(dnulls, newlen * sizeof(bool));
+ eah->dnulls = dnulls = repalloc_array(dnulls, bool,
+ newlen);
eah->dvalueslen = newlen;
}
@@ -3271,8 +3271,8 @@ array_map(Datum arrayd,
typalign = ret_extra->typalign;
/* Allocate temporary arrays for new values */
- values = (Datum *) palloc(nitems * sizeof(Datum));
- nulls = (bool *) palloc(nitems * sizeof(bool));
+ values = palloc_array(Datum, nitems);
+ nulls = palloc_array(bool, nitems);
/* Loop over source data */
array_iter_setup(&iter, v);
@@ -3581,7 +3581,7 @@ construct_empty_array(Oid elmtype)
{
ArrayType *result;
- result = (ArrayType *) palloc0(sizeof(ArrayType));
+ result = palloc0_object(ArrayType);
SET_VARSIZE(result, sizeof(ArrayType));
result->ndim = 0;
result->dataoffset = 0;
@@ -3644,9 +3644,9 @@ deconstruct_array(ArrayType *array,
Assert(ARR_ELEMTYPE(array) == elmtype);
nelems = ArrayGetNItems(ARR_NDIM(array), ARR_DIMS(array));
- *elemsp = elems = (Datum *) palloc(nelems * sizeof(Datum));
+ *elemsp = elems = palloc_array(Datum, nelems);
if (nullsp)
- *nullsp = nulls = (bool *) palloc0(nelems * sizeof(bool));
+ *nullsp = nulls = palloc0_array(bool, nelems);
else
nulls = NULL;
*nelemsp = nelems;
@@ -4208,7 +4208,7 @@ hash_array(PG_FUNCTION_ARGS)
* modify typentry, since that points directly into the type
* cache.
*/
- record_typentry = palloc0(sizeof(*record_typentry));
+ record_typentry = palloc0_object(TypeCacheEntry);
record_typentry->type_id = element_type;
/* fill in what we need below */
@@ -5941,7 +5941,7 @@ generate_subscripts(PG_FUNCTION_ARGS)
* switch to memory context appropriate for multiple function calls
*/
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- fctx = (generate_subscripts_fctx *) palloc(sizeof(generate_subscripts_fctx));
+ fctx = palloc_object(generate_subscripts_fctx);
lb = AARR_LBOUND(v);
dimv = AARR_DIMS(v);
@@ -6288,7 +6288,7 @@ array_unnest(PG_FUNCTION_ARGS)
arr = PG_GETARG_ANY_ARRAY_P(0);
/* allocate memory for user context */
- fctx = (array_unnest_fctx *) palloc(sizeof(array_unnest_fctx));
+ fctx = palloc_object(array_unnest_fctx);
/* initialize state */
array_iter_setup(&fctx->iter, arr);
@@ -6461,8 +6461,8 @@ array_replace_internal(ArrayType *array,
collation, NULL, NULL);
/* Allocate temporary arrays for new values */
- values = (Datum *) palloc(nitems * sizeof(Datum));
- nulls = (bool *) palloc(nitems * sizeof(bool));
+ values = palloc_array(Datum, nitems);
+ nulls = palloc_array(bool, nitems);
/* Loop over source data */
arraydataptr = ARR_DATA_PTR(array);
diff --git a/src/backend/utils/adt/arraysubs.c b/src/backend/utils/adt/arraysubs.c
index 562179b3799..29894b13941 100644
--- a/src/backend/utils/adt/arraysubs.c
+++ b/src/backend/utils/adt/arraysubs.c
@@ -496,7 +496,7 @@ array_exec_setup(const SubscriptingRef *sbsref,
/*
* Allocate type-specific workspace.
*/
- workspace = (ArraySubWorkspace *) palloc(sizeof(ArraySubWorkspace));
+ workspace = palloc_object(ArraySubWorkspace);
sbsrefstate->workspace = workspace;
/*
diff --git a/src/backend/utils/adt/arrayutils.c b/src/backend/utils/adt/arrayutils.c
index 650bb51d4cd..228b37af971 100644
--- a/src/backend/utils/adt/arrayutils.c
+++ b/src/backend/utils/adt/arrayutils.c
@@ -253,7 +253,7 @@ ArrayGetIntegerTypmods(ArrayType *arr, int *n)
deconstruct_array_builtin(arr, CSTRINGOID, &elem_values, NULL, n);
- result = (int32 *) palloc(*n * sizeof(int32));
+ result = palloc_array(int32, *n);
for (i = 0; i < *n; i++)
result[i] = pg_strtoint32(DatumGetCString(elem_values[i]));
diff --git a/src/backend/utils/adt/date.c b/src/backend/utils/adt/date.c
index f279853deb8..7c2bfca5101 100644
--- a/src/backend/utils/adt/date.c
+++ b/src/backend/utils/adt/date.c
@@ -356,7 +356,7 @@ GetSQLCurrentTime(int32 typmod)
GetCurrentTimeUsec(tm, &fsec, &tz);
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
tm2timetz(tm, fsec, tz, result);
AdjustTimeForTypmod(&(result->time), typmod);
return result;
@@ -2010,7 +2010,7 @@ time_interval(PG_FUNCTION_ARGS)
TimeADT time = PG_GETARG_TIMEADT(0);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
result->time = time;
result->day = 0;
@@ -2055,7 +2055,7 @@ time_mi_time(PG_FUNCTION_ARGS)
TimeADT time2 = PG_GETARG_TIMEADT(1);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
result->month = 0;
result->day = 0;
@@ -2322,7 +2322,7 @@ timetz_in(PG_FUNCTION_ARGS)
PG_RETURN_NULL();
}
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
tm2timetz(tm, fsec, tz, result);
AdjustTimeForTypmod(&(result->time), typmod);
@@ -2361,7 +2361,7 @@ timetz_recv(PG_FUNCTION_ARGS)
int32 typmod = PG_GETARG_INT32(2);
TimeTzADT *result;
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = pq_getmsgint64(buf);
@@ -2447,7 +2447,7 @@ timetz_scale(PG_FUNCTION_ARGS)
int32 typmod = PG_GETARG_INT32(1);
TimeTzADT *result;
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = time->time;
result->zone = time->zone;
@@ -2623,7 +2623,7 @@ timetz_pl_interval(PG_FUNCTION_ARGS)
(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
errmsg("cannot add infinite interval to time")));
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = time->time + span->time;
result->time -= result->time / USECS_PER_DAY * USECS_PER_DAY;
@@ -2650,7 +2650,7 @@ timetz_mi_interval(PG_FUNCTION_ARGS)
(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
errmsg("cannot subtract infinite interval from time")));
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = time->time - span->time;
result->time -= result->time / USECS_PER_DAY * USECS_PER_DAY;
@@ -2857,7 +2857,7 @@ time_timetz(PG_FUNCTION_ARGS)
time2tm(time, tm, &fsec);
tz = DetermineTimeZoneOffset(tm, session_timezone);
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = time;
result->zone = tz;
@@ -2887,7 +2887,7 @@ timestamptz_timetz(PG_FUNCTION_ARGS)
(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
errmsg("timestamp out of range")));
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
tm2timetz(tm, fsec, tz, result);
@@ -3120,7 +3120,7 @@ timetz_zone(PG_FUNCTION_ARGS)
errmsg("timestamp out of range")));
}
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = t->time + (t->zone - tz) * USECS_PER_SEC;
/* C99 modulo has the wrong sign convention for negative input */
@@ -3161,7 +3161,7 @@ timetz_izone(PG_FUNCTION_ARGS)
tz = -(zone->time / USECS_PER_SEC);
- result = (TimeTzADT *) palloc(sizeof(TimeTzADT));
+ result = palloc_object(TimeTzADT);
result->time = time->time + (time->zone - tz) * USECS_PER_SEC;
/* C99 modulo has the wrong sign convention for negative input */
diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c
index 5d893cff50c..7651c19afdd 100644
--- a/src/backend/utils/adt/datetime.c
+++ b/src/backend/utils/adt/datetime.c
@@ -5146,7 +5146,7 @@ pg_timezone_abbrevs_zone(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- pindex = (int *) palloc(sizeof(int));
+ pindex = palloc_object(int);
*pindex = 0;
funcctx->user_fctx = pindex;
@@ -5181,7 +5181,7 @@ pg_timezone_abbrevs_zone(PG_FUNCTION_ARGS)
/* Convert offset (in seconds) to an interval; can't overflow */
MemSet(&itm_in, 0, sizeof(struct pg_itm_in));
itm_in.tm_usec = (int64) gmtoff * USECS_PER_SEC;
- resInterval = (Interval *) palloc(sizeof(Interval));
+ resInterval = palloc_object(Interval);
(void) itmin2interval(&itm_in, resInterval);
values[1] = IntervalPGetDatum(resInterval);
@@ -5233,7 +5233,7 @@ pg_timezone_abbrevs_abbrevs(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- pindex = (int *) palloc(sizeof(int));
+ pindex = palloc_object(int);
*pindex = 0;
funcctx->user_fctx = pindex;
@@ -5304,7 +5304,7 @@ pg_timezone_abbrevs_abbrevs(PG_FUNCTION_ARGS)
/* Convert offset (in seconds) to an interval; can't overflow */
MemSet(&itm_in, 0, sizeof(struct pg_itm_in));
itm_in.tm_usec = (int64) gmtoffset * USECS_PER_SEC;
- resInterval = (Interval *) palloc(sizeof(Interval));
+ resInterval = palloc_object(Interval);
(void) itmin2interval(&itm_in, resInterval);
values[1] = IntervalPGetDatum(resInterval);
@@ -5372,7 +5372,7 @@ pg_timezone_names(PG_FUNCTION_ARGS)
/* Convert tzoff to an interval; can't overflow */
MemSet(&itm_in, 0, sizeof(struct pg_itm_in));
itm_in.tm_usec = (int64) -tzoff * USECS_PER_SEC;
- resInterval = (Interval *) palloc(sizeof(Interval));
+ resInterval = palloc_object(Interval);
(void) itmin2interval(&itm_in, resInterval);
values[2] = IntervalPGetDatum(resInterval);
diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c
index fcc6981632b..65d476d211c 100644
--- a/src/backend/utils/adt/enum.c
+++ b/src/backend/utils/adt/enum.c
@@ -572,7 +572,7 @@ enum_range_internal(Oid enumtypoid, Oid lower, Oid upper)
enum_scan = systable_beginscan_ordered(enum_rel, enum_idx, NULL, 1, &skey);
max = 64;
- elems = (Datum *) palloc(max * sizeof(Datum));
+ elems = palloc_array(Datum, max);
cnt = 0;
left_found = !OidIsValid(lower);
@@ -591,7 +591,7 @@ enum_range_internal(Oid enumtypoid, Oid lower, Oid upper)
if (cnt >= max)
{
max *= 2;
- elems = (Datum *) repalloc(elems, max * sizeof(Datum));
+ elems = repalloc_array(elems, Datum, max);
}
elems[cnt++] = ObjectIdGetDatum(enum_oid);
diff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c
index 3960235e14e..796fcd5e257 100644
--- a/src/backend/utils/adt/formatting.c
+++ b/src/backend/utils/adt/formatting.c
@@ -3839,7 +3839,7 @@ datetime_to_char_body(TmToChar *tmtc, text *fmt, bool is_interval, Oid collid)
*/
incache = false;
- format = (FormatNode *) palloc((fmt_len + 1) * sizeof(FormatNode));
+ format = palloc_array(FormatNode, (fmt_len + 1));
parse_format(format, fmt_str, DCH_keywords,
DCH_suff, DCH_index, DCH_FLAG, NULL);
@@ -4189,7 +4189,7 @@ parse_datetime(text *date_txt, text *fmt, Oid collid, bool strict,
{
if (flags & DCH_ZONED)
{
- TimeTzADT *result = palloc(sizeof(TimeTzADT));
+ TimeTzADT *result = palloc_object(TimeTzADT);
if (ftz.has_tz)
{
@@ -4262,7 +4262,7 @@ datetime_format_has_tz(const char *fmt_str)
*/
incache = false;
- format = (FormatNode *) palloc((fmt_len + 1) * sizeof(FormatNode));
+ format = palloc_array(FormatNode, (fmt_len + 1));
parse_format(format, fmt_str, DCH_keywords,
DCH_suff, DCH_index, DCH_FLAG, NULL);
@@ -4350,7 +4350,7 @@ do_to_timestamp(text *date_txt, text *fmt, Oid collid, bool std,
* Allocate new memory if format picture is bigger than static
* cache and do not use cache (call parser always)
*/
- format = (FormatNode *) palloc((fmt_len + 1) * sizeof(FormatNode));
+ format = palloc_array(FormatNode, (fmt_len + 1));
parse_format(format, fmt_str, DCH_keywords, DCH_suff, DCH_index,
DCH_FLAG | (std ? STD_FLAG : 0), NULL);
@@ -4898,7 +4898,7 @@ NUM_cache(int len, NUMDesc *Num, text *pars_str, bool *shouldFree)
* Allocate new memory if format picture is bigger than static cache
* and do not use cache (call parser always)
*/
- format = (FormatNode *) palloc((len + 1) * sizeof(FormatNode));
+ format = palloc_array(FormatNode, (len + 1));
*shouldFree = true;
diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c
index 377a1b3f3ad..2537d00ff87 100644
--- a/src/backend/utils/adt/geo_ops.c
+++ b/src/backend/utils/adt/geo_ops.c
@@ -423,7 +423,7 @@ box_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
Node *escontext = fcinfo->context;
- BOX *box = (BOX *) palloc(sizeof(BOX));
+ BOX *box = palloc_object(BOX);
bool isopen;
float8 x,
y;
@@ -470,7 +470,7 @@ box_recv(PG_FUNCTION_ARGS)
float8 x,
y;
- box = (BOX *) palloc(sizeof(BOX));
+ box = palloc_object(BOX);
box->high.x = pq_getmsgfloat8(buf);
box->high.y = pq_getmsgfloat8(buf);
@@ -849,7 +849,7 @@ Datum
box_center(PG_FUNCTION_ARGS)
{
BOX *box = PG_GETARG_BOX_P(0);
- Point *result = (Point *) palloc(sizeof(Point));
+ Point *result = palloc_object(Point);
box_cn(result, box);
@@ -914,7 +914,7 @@ box_intersect(PG_FUNCTION_ARGS)
if (!box_ov(box1, box2))
PG_RETURN_NULL();
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
result->high.x = float8_min(box1->high.x, box2->high.x);
result->low.x = float8_max(box1->low.x, box2->low.x);
@@ -933,7 +933,7 @@ Datum
box_diagonal(PG_FUNCTION_ARGS)
{
BOX *box = PG_GETARG_BOX_P(0);
- LSEG *result = (LSEG *) palloc(sizeof(LSEG));
+ LSEG *result = palloc_object(LSEG);
statlseg_construct(result, &box->high, &box->low);
@@ -980,7 +980,7 @@ line_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
Node *escontext = fcinfo->context;
- LINE *line = (LINE *) palloc(sizeof(LINE));
+ LINE *line = palloc_object(LINE);
LSEG lseg;
bool isopen;
char *s;
@@ -1040,7 +1040,7 @@ line_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
LINE *line;
- line = (LINE *) palloc(sizeof(LINE));
+ line = palloc_object(LINE);
line->A = pq_getmsgfloat8(buf);
line->B = pq_getmsgfloat8(buf);
@@ -1116,7 +1116,7 @@ line_construct_pp(PG_FUNCTION_ARGS)
{
Point *pt1 = PG_GETARG_POINT_P(0);
Point *pt2 = PG_GETARG_POINT_P(1);
- LINE *result = (LINE *) palloc(sizeof(LINE));
+ LINE *result = palloc_object(LINE);
if (point_eq_point(pt1, pt2))
ereport(ERROR,
@@ -1289,7 +1289,7 @@ line_interpt(PG_FUNCTION_ARGS)
LINE *l2 = PG_GETARG_LINE_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (!line_interpt_line(result, l1, l2))
PG_RETURN_NULL();
@@ -1831,7 +1831,7 @@ Datum
point_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
- Point *point = (Point *) palloc(sizeof(Point));
+ Point *point = palloc_object(Point);
/* Ignore failure from pair_decode, since our return value won't matter */
pair_decode(str, &point->x, &point->y, NULL, "point", str, fcinfo->context);
@@ -1855,7 +1855,7 @@ point_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
Point *point;
- point = (Point *) palloc(sizeof(Point));
+ point = palloc_object(Point);
point->x = pq_getmsgfloat8(buf);
point->y = pq_getmsgfloat8(buf);
PG_RETURN_POINT_P(point);
@@ -2066,7 +2066,7 @@ lseg_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
Node *escontext = fcinfo->context;
- LSEG *lseg = (LSEG *) palloc(sizeof(LSEG));
+ LSEG *lseg = palloc_object(LSEG);
bool isopen;
if (!path_decode(str, true, 2, &lseg->p[0], &isopen, NULL, "lseg", str,
@@ -2094,7 +2094,7 @@ lseg_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
LSEG *lseg;
- lseg = (LSEG *) palloc(sizeof(LSEG));
+ lseg = palloc_object(LSEG);
lseg->p[0].x = pq_getmsgfloat8(buf);
lseg->p[0].y = pq_getmsgfloat8(buf);
@@ -2130,7 +2130,7 @@ lseg_construct(PG_FUNCTION_ARGS)
{
Point *pt1 = PG_GETARG_POINT_P(0);
Point *pt2 = PG_GETARG_POINT_P(1);
- LSEG *result = (LSEG *) palloc(sizeof(LSEG));
+ LSEG *result = palloc_object(LSEG);
statlseg_construct(result, pt1, pt2);
@@ -2318,7 +2318,7 @@ lseg_center(PG_FUNCTION_ARGS)
LSEG *lseg = PG_GETARG_LSEG_P(0);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
result->x = float8_div(float8_pl(lseg->p[0].x, lseg->p[1].x), 2.0);
result->y = float8_div(float8_pl(lseg->p[0].y, lseg->p[1].y), 2.0);
@@ -2364,7 +2364,7 @@ lseg_interpt(PG_FUNCTION_ARGS)
LSEG *l2 = PG_GETARG_LSEG_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (!lseg_interpt_lseg(result, l1, l2))
PG_RETURN_NULL();
@@ -2753,7 +2753,7 @@ close_pl(PG_FUNCTION_ARGS)
LINE *line = PG_GETARG_LINE_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(line_closept_point(result, line, pt)))
PG_RETURN_NULL();
@@ -2794,7 +2794,7 @@ close_ps(PG_FUNCTION_ARGS)
LSEG *lseg = PG_GETARG_LSEG_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(lseg_closept_point(result, lseg, pt)))
PG_RETURN_NULL();
@@ -2859,7 +2859,7 @@ close_lseg(PG_FUNCTION_ARGS)
if (lseg_sl(l1) == lseg_sl(l2))
PG_RETURN_NULL();
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(lseg_closept_lseg(result, l2, l1)))
PG_RETURN_NULL();
@@ -2936,7 +2936,7 @@ close_pb(PG_FUNCTION_ARGS)
BOX *box = PG_GETARG_BOX_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(box_closept_point(result, box, pt)))
PG_RETURN_NULL();
@@ -2994,7 +2994,7 @@ close_ls(PG_FUNCTION_ARGS)
if (lseg_sl(lseg) == line_sl(line))
PG_RETURN_NULL();
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(lseg_closept_line(result, lseg, line)))
PG_RETURN_NULL();
@@ -3066,7 +3066,7 @@ close_sb(PG_FUNCTION_ARGS)
BOX *box = PG_GETARG_BOX_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
if (isnan(box_closept_lseg(result, box, lseg)))
PG_RETURN_NULL();
@@ -4099,7 +4099,7 @@ construct_point(PG_FUNCTION_ARGS)
float8 y = PG_GETARG_FLOAT8(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
point_construct(result, x, y);
@@ -4122,7 +4122,7 @@ point_add(PG_FUNCTION_ARGS)
Point *p2 = PG_GETARG_POINT_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
point_add_point(result, p1, p2);
@@ -4145,7 +4145,7 @@ point_sub(PG_FUNCTION_ARGS)
Point *p2 = PG_GETARG_POINT_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
point_sub_point(result, p1, p2);
@@ -4170,7 +4170,7 @@ point_mul(PG_FUNCTION_ARGS)
Point *p2 = PG_GETARG_POINT_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
point_mul_point(result, p1, p2);
@@ -4199,7 +4199,7 @@ point_div(PG_FUNCTION_ARGS)
Point *p2 = PG_GETARG_POINT_P(1);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
point_div_point(result, p1, p2);
@@ -4220,7 +4220,7 @@ points_box(PG_FUNCTION_ARGS)
Point *p2 = PG_GETARG_POINT_P(1);
BOX *result;
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
box_construct(result, p1, p2);
@@ -4234,7 +4234,7 @@ box_add(PG_FUNCTION_ARGS)
Point *p = PG_GETARG_POINT_P(1);
BOX *result;
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
point_add_point(&result->high, &box->high, p);
point_add_point(&result->low, &box->low, p);
@@ -4249,7 +4249,7 @@ box_sub(PG_FUNCTION_ARGS)
Point *p = PG_GETARG_POINT_P(1);
BOX *result;
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
point_sub_point(&result->high, &box->high, p);
point_sub_point(&result->low, &box->low, p);
@@ -4266,7 +4266,7 @@ box_mul(PG_FUNCTION_ARGS)
Point high,
low;
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
point_mul_point(&high, &box->high, p);
point_mul_point(&low, &box->low, p);
@@ -4285,7 +4285,7 @@ box_div(PG_FUNCTION_ARGS)
Point high,
low;
- result = (BOX *) palloc(sizeof(BOX));
+ result = palloc_object(BOX);
point_div_point(&high, &box->high, p);
point_div_point(&low, &box->low, p);
@@ -4304,7 +4304,7 @@ point_box(PG_FUNCTION_ARGS)
Point *pt = PG_GETARG_POINT_P(0);
BOX *box;
- box = (BOX *) palloc(sizeof(BOX));
+ box = palloc_object(BOX);
box->high.x = pt->x;
box->low.x = pt->x;
@@ -4324,7 +4324,7 @@ boxes_bound_box(PG_FUNCTION_ARGS)
*box2 = PG_GETARG_BOX_P(1),
*container;
- container = (BOX *) palloc(sizeof(BOX));
+ container = palloc_object(BOX);
container->high.x = float8_max(box1->high.x, box2->high.x);
container->low.x = float8_min(box1->low.x, box2->low.x);
@@ -4506,7 +4506,7 @@ poly_center(PG_FUNCTION_ARGS)
Point *result;
CIRCLE circle;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
poly_to_circle(&circle, poly);
*result = circle.center;
@@ -4521,7 +4521,7 @@ poly_box(PG_FUNCTION_ARGS)
POLYGON *poly = PG_GETARG_POLYGON_P(0);
BOX *box;
- box = (BOX *) palloc(sizeof(BOX));
+ box = palloc_object(BOX);
*box = poly->boundbox;
PG_RETURN_BOX_P(box);
@@ -4612,7 +4612,7 @@ circle_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
Node *escontext = fcinfo->context;
- CIRCLE *circle = (CIRCLE *) palloc(sizeof(CIRCLE));
+ CIRCLE *circle = palloc_object(CIRCLE);
char *s,
*cp;
int depth = 0;
@@ -4705,7 +4705,7 @@ circle_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
CIRCLE *circle;
- circle = (CIRCLE *) palloc(sizeof(CIRCLE));
+ circle = palloc_object(CIRCLE);
circle->center.x = pq_getmsgfloat8(buf);
circle->center.y = pq_getmsgfloat8(buf);
@@ -4968,7 +4968,7 @@ circle_add_pt(PG_FUNCTION_ARGS)
Point *point = PG_GETARG_POINT_P(1);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
point_add_point(&result->center, &circle->center, point);
result->radius = circle->radius;
@@ -4983,7 +4983,7 @@ circle_sub_pt(PG_FUNCTION_ARGS)
Point *point = PG_GETARG_POINT_P(1);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
point_sub_point(&result->center, &circle->center, point);
result->radius = circle->radius;
@@ -5002,7 +5002,7 @@ circle_mul_pt(PG_FUNCTION_ARGS)
Point *point = PG_GETARG_POINT_P(1);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
point_mul_point(&result->center, &circle->center, point);
result->radius = float8_mul(circle->radius, HYPOT(point->x, point->y));
@@ -5017,7 +5017,7 @@ circle_div_pt(PG_FUNCTION_ARGS)
Point *point = PG_GETARG_POINT_P(1);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
point_div_point(&result->center, &circle->center, point);
result->radius = float8_div(circle->radius, HYPOT(point->x, point->y));
@@ -5145,7 +5145,7 @@ circle_center(PG_FUNCTION_ARGS)
CIRCLE *circle = PG_GETARG_CIRCLE_P(0);
Point *result;
- result = (Point *) palloc(sizeof(Point));
+ result = palloc_object(Point);
result->x = circle->center.x;
result->y = circle->center.y;
@@ -5173,7 +5173,7 @@ cr_circle(PG_FUNCTION_ARGS)
float8 radius = PG_GETARG_FLOAT8(1);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
result->center.x = center->x;
result->center.y = center->y;
@@ -5189,7 +5189,7 @@ circle_box(PG_FUNCTION_ARGS)
BOX *box;
float8 delta;
- box = (BOX *) palloc(sizeof(BOX));
+ box = palloc_object(BOX);
delta = float8_div(circle->radius, sqrt(2.0));
@@ -5210,7 +5210,7 @@ box_circle(PG_FUNCTION_ARGS)
BOX *box = PG_GETARG_BOX_P(0);
CIRCLE *circle;
- circle = (CIRCLE *) palloc(sizeof(CIRCLE));
+ circle = palloc_object(CIRCLE);
circle->center.x = float8_div(float8_pl(box->high.x, box->low.x), 2.0);
circle->center.y = float8_div(float8_pl(box->high.y, box->low.y), 2.0);
@@ -5309,7 +5309,7 @@ poly_circle(PG_FUNCTION_ARGS)
POLYGON *poly = PG_GETARG_POLYGON_P(0);
CIRCLE *result;
- result = (CIRCLE *) palloc(sizeof(CIRCLE));
+ result = palloc_object(CIRCLE);
poly_to_circle(result, poly);
diff --git a/src/backend/utils/adt/geo_spgist.c b/src/backend/utils/adt/geo_spgist.c
index fec33e95372..f5a3b871e95 100644
--- a/src/backend/utils/adt/geo_spgist.c
+++ b/src/backend/utils/adt/geo_spgist.c
@@ -156,7 +156,7 @@ getQuadrant(BOX *centroid, BOX *inBox)
static RangeBox *
getRangeBox(BOX *box)
{
- RangeBox *range_box = (RangeBox *) palloc(sizeof(RangeBox));
+ RangeBox *range_box = palloc_object(RangeBox);
range_box->left.low = box->low.x;
range_box->left.high = box->high.x;
@@ -176,7 +176,7 @@ getRangeBox(BOX *box)
static RectBox *
initRectBox(void)
{
- RectBox *rect_box = (RectBox *) palloc(sizeof(RectBox));
+ RectBox *rect_box = palloc_object(RectBox);
float8 infinity = get_float8_infinity();
rect_box->range_box_x.left.low = -infinity;
@@ -204,7 +204,7 @@ initRectBox(void)
static RectBox *
nextRectBox(RectBox *rect_box, RangeBox *centroid, uint8 quadrant)
{
- RectBox *next_rect_box = (RectBox *) palloc(sizeof(RectBox));
+ RectBox *next_rect_box = palloc_object(RectBox);
memcpy(next_rect_box, rect_box, sizeof(RectBox));
@@ -445,10 +445,10 @@ spg_box_quad_picksplit(PG_FUNCTION_ARGS)
BOX *centroid;
int median,
i;
- float8 *lowXs = palloc(sizeof(float8) * in->nTuples);
- float8 *highXs = palloc(sizeof(float8) * in->nTuples);
- float8 *lowYs = palloc(sizeof(float8) * in->nTuples);
- float8 *highYs = palloc(sizeof(float8) * in->nTuples);
+ float8 *lowXs = palloc_array(float8, in->nTuples);
+ float8 *highXs = palloc_array(float8, in->nTuples);
+ float8 *lowYs = palloc_array(float8, in->nTuples);
+ float8 *highYs = palloc_array(float8, in->nTuples);
/* Calculate median of all 4D coordinates */
for (i = 0; i < in->nTuples; i++)
@@ -468,7 +468,7 @@ spg_box_quad_picksplit(PG_FUNCTION_ARGS)
median = in->nTuples / 2;
- centroid = palloc(sizeof(BOX));
+ centroid = palloc_object(BOX);
centroid->low.x = lowXs[median];
centroid->high.x = highXs[median];
@@ -580,7 +580,8 @@ spg_box_quad_inner_consistent(PG_FUNCTION_ARGS)
if (in->norderbys > 0 && in->nNodes > 0)
{
- double *distances = palloc(sizeof(double) * in->norderbys);
+ double *distances = palloc_array(double,
+ in->norderbys);
int j;
for (j = 0; j < in->norderbys; j++)
@@ -609,7 +610,7 @@ spg_box_quad_inner_consistent(PG_FUNCTION_ARGS)
* following operations.
*/
centroid = getRangeBox(DatumGetBoxP(in->prefixDatum));
- queries = (RangeBox **) palloc(in->nkeys * sizeof(RangeBox *));
+ queries = palloc_array(RangeBox *, in->nkeys);
for (i = 0; i < in->nkeys; i++)
{
BOX *box = spg_box_quad_get_scankey_bbox(&in->scankeys[i], NULL);
@@ -703,7 +704,8 @@ spg_box_quad_inner_consistent(PG_FUNCTION_ARGS)
if (in->norderbys > 0)
{
- double *distances = palloc(sizeof(double) * in->norderbys);
+ double *distances = palloc_array(double,
+ in->norderbys);
int j;
out->distances[out->nNodes] = distances;
@@ -878,7 +880,7 @@ spg_poly_quad_compress(PG_FUNCTION_ARGS)
POLYGON *polygon = PG_GETARG_POLYGON_P(0);
BOX *box;
- box = (BOX *) palloc(sizeof(BOX));
+ box = palloc_object(BOX);
*box = polygon->boundbox;
PG_RETURN_BOX_P(box);
diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index b5781989a64..60411ee024d 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -1537,7 +1537,7 @@ generate_series_step_int4(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- fctx = (generate_series_fctx *) palloc(sizeof(generate_series_fctx));
+ fctx = palloc_object(generate_series_fctx);
/*
* Use fctx to keep state from call to call. Seed current with the
diff --git a/src/backend/utils/adt/int8.c b/src/backend/utils/adt/int8.c
index 9dd5889f34c..159626bf343 100644
--- a/src/backend/utils/adt/int8.c
+++ b/src/backend/utils/adt/int8.c
@@ -1411,7 +1411,7 @@ generate_series_step_int8(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- fctx = (generate_series_fctx *) palloc(sizeof(generate_series_fctx));
+ fctx = palloc_object(generate_series_fctx);
/*
* Use fctx to keep state from call to call. Seed current with the
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index 51452755f58..9a66e3875cd 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -805,7 +805,7 @@ json_agg_transfn_worker(FunctionCallInfo fcinfo, bool absent_on_null)
* use the right context to enlarge the object if necessary.
*/
oldcontext = MemoryContextSwitchTo(aggcontext);
- state = (JsonAggState *) palloc(sizeof(JsonAggState));
+ state = palloc_object(JsonAggState);
state->str = makeStringInfo();
MemoryContextSwitchTo(oldcontext);
@@ -1027,7 +1027,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,
* sure they use the right context to enlarge the object if necessary.
*/
oldcontext = MemoryContextSwitchTo(aggcontext);
- state = (JsonAggState *) palloc(sizeof(JsonAggState));
+ state = palloc_object(JsonAggState);
state->str = makeStringInfo();
if (unique_keys)
json_unique_builder_init(&state->unique_check);
@@ -1760,7 +1760,7 @@ json_unique_object_start(void *_state)
return JSON_SUCCESS;
/* push object entry to stack */
- entry = palloc(sizeof(*entry));
+ entry = palloc_object(JsonUniqueStackEntry);
entry->object_id = state->id_counter++;
entry->parent = state->stack;
state->stack = entry;
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index f4889d9ed72..992b8978d10 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -1477,7 +1477,7 @@ clone_parse_state(JsonbParseState *state)
if (state == NULL)
return NULL;
- result = palloc(sizeof(JsonbParseState));
+ result = palloc_object(JsonbParseState);
icursor = state;
ocursor = result;
for (;;)
@@ -1530,8 +1530,8 @@ jsonb_agg_transfn_worker(FunctionCallInfo fcinfo, bool absent_on_null)
errmsg("could not determine input data type")));
oldcontext = MemoryContextSwitchTo(aggcontext);
- state = palloc(sizeof(JsonbAggState));
- result = palloc0(sizeof(JsonbInState));
+ state = palloc_object(JsonbAggState);
+ result = palloc0_object(JsonbInState);
state->res = result;
result->res = pushJsonbValue(&result->parseState,
WJB_BEGIN_ARRAY, NULL);
@@ -1700,8 +1700,8 @@ jsonb_object_agg_transfn_worker(FunctionCallInfo fcinfo,
Oid arg_type;
oldcontext = MemoryContextSwitchTo(aggcontext);
- state = palloc(sizeof(JsonbAggState));
- result = palloc0(sizeof(JsonbInState));
+ state = palloc_object(JsonbAggState);
+ result = palloc0_object(JsonbInState);
state->res = result;
result->res = pushJsonbValue(&result->parseState,
WJB_BEGIN_OBJECT, NULL);
diff --git a/src/backend/utils/adt/jsonb_gin.c b/src/backend/utils/adt/jsonb_gin.c
index c1950792b5a..c9a2fc94f3f 100644
--- a/src/backend/utils/adt/jsonb_gin.c
+++ b/src/backend/utils/adt/jsonb_gin.c
@@ -307,7 +307,7 @@ jsonb_ops__add_path_item(JsonPathGinPath *path, JsonPathItem *jsp)
return false;
}
- pentry = palloc(sizeof(*pentry));
+ pentry = palloc_object(JsonPathGinPathItem);
pentry->type = jsp->type;
pentry->keyName = keyName;
@@ -869,7 +869,7 @@ gin_extract_jsonb_query(PG_FUNCTION_ARGS)
text *query = PG_GETARG_TEXT_PP(0);
*nentries = 1;
- entries = (Datum *) palloc(sizeof(Datum));
+ entries = palloc_object(Datum);
entries[0] = make_text_key(JGINFLAG_KEY,
VARDATA_ANY(query),
VARSIZE_ANY_EXHDR(query));
@@ -887,7 +887,7 @@ gin_extract_jsonb_query(PG_FUNCTION_ARGS)
deconstruct_array_builtin(query, TEXTOID, &key_datums, &key_nulls, &key_count);
- entries = (Datum *) palloc(sizeof(Datum) * key_count);
+ entries = palloc_array(Datum, key_count);
for (i = 0, j = 0; i < key_count; i++)
{
@@ -1126,7 +1126,7 @@ gin_extract_jsonb_path(PG_FUNCTION_ARGS)
case WJB_BEGIN_OBJECT:
/* Push a stack level for this object */
parent = stack;
- stack = (PathHashStack *) palloc(sizeof(PathHashStack));
+ stack = palloc_object(PathHashStack);
/*
* We pass forward hashes from outer nesting levels so that
diff --git a/src/backend/utils/adt/jsonb_util.c b/src/backend/utils/adt/jsonb_util.c
index 773f3690c7b..920050eec01 100644
--- a/src/backend/utils/adt/jsonb_util.c
+++ b/src/backend/utils/adt/jsonb_util.c
@@ -362,7 +362,7 @@ findJsonbValueFromContainer(JsonbContainer *container, uint32 flags,
if ((flags & JB_FARRAY) && JsonContainerIsArray(container))
{
- JsonbValue *result = palloc(sizeof(JsonbValue));
+ JsonbValue *result = palloc_object(JsonbValue);
char *base_addr = (char *) (children + count);
uint32 offset = 0;
int i;
@@ -445,7 +445,7 @@ getKeyJsonValueFromContainer(JsonbContainer *container,
int index = stopMiddle + count;
if (!res)
- res = palloc(sizeof(JsonbValue));
+ res = palloc_object(JsonbValue);
fillJsonbValue(container, index, baseAddr,
getJsonbOffset(container, index),
@@ -487,7 +487,7 @@ getIthJsonbValueFromContainer(JsonbContainer *container, uint32 i)
if (i >= nelements)
return NULL;
- result = palloc(sizeof(JsonbValue));
+ result = palloc_object(JsonbValue);
fillJsonbValue(container, i, base_addr,
getJsonbOffset(container, i),
@@ -737,7 +737,7 @@ pushJsonbValueScalar(JsonbParseState **pstate, JsonbIteratorToken seq,
static JsonbParseState *
pushState(JsonbParseState **pstate)
{
- JsonbParseState *ns = palloc(sizeof(JsonbParseState));
+ JsonbParseState *ns = palloc_object(JsonbParseState);
ns->next = *pstate;
ns->unique_keys = false;
@@ -1007,7 +1007,7 @@ iteratorFromContainer(JsonbContainer *container, JsonbIterator *parent)
{
JsonbIterator *it;
- it = palloc0(sizeof(JsonbIterator));
+ it = palloc0_object(JsonbIterator);
it->container = container;
it->parent = parent;
it->nElems = JsonContainerSize(container);
@@ -1253,7 +1253,8 @@ JsonbDeepContains(JsonbIterator **val, JsonbIterator **mContained)
uint32 j = 0;
/* Make room for all possible values */
- lhsConts = palloc(sizeof(JsonbValue) * nLhsElems);
+ lhsConts = palloc_array(JsonbValue,
+ nLhsElems);
for (i = 0; i < nLhsElems; i++)
{
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index c2e90f1a3bf..e1824a03c93 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -592,7 +592,7 @@ jsonb_object_keys(PG_FUNCTION_ARGS)
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- state = palloc(sizeof(OkeysState));
+ state = palloc_object(OkeysState);
state->result_size = JB_ROOT_COUNT(jb);
state->result_count = 0;
@@ -743,8 +743,8 @@ json_object_keys(PG_FUNCTION_ARGS)
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- state = palloc(sizeof(OkeysState));
- sem = palloc0(sizeof(JsonSemAction));
+ state = palloc_object(OkeysState);
+ sem = palloc0_object(JsonSemAction);
state->lex = makeJsonLexContext(&lex, json, true);
state->result_size = 256;
@@ -1044,8 +1044,8 @@ get_path_all(FunctionCallInfo fcinfo, bool as_text)
deconstruct_array_builtin(path, TEXTOID, &pathtext, &pathnulls, &npath);
- tpath = palloc(npath * sizeof(char *));
- ipath = palloc(npath * sizeof(int));
+ tpath = palloc_array(char *, npath);
+ ipath = palloc_array(int, npath);
for (i = 0; i < npath; i++)
{
@@ -1105,8 +1105,8 @@ get_worker(text *json,
int npath,
bool normalize_results)
{
- JsonSemAction *sem = palloc0(sizeof(JsonSemAction));
- GetState *state = palloc0(sizeof(GetState));
+ JsonSemAction *sem = palloc0_object(JsonSemAction);
+ GetState *state = palloc0_object(GetState);
Assert(npath >= 0);
@@ -1681,7 +1681,7 @@ jsonb_set_element(Jsonb *jb, Datum *path, int path_len,
JsonbValue *res;
JsonbParseState *state = NULL;
JsonbIterator *it;
- bool *path_nulls = palloc0(path_len * sizeof(bool));
+ bool *path_nulls = palloc0_array(bool, path_len);
if (newval->type == jbvArray && newval->val.array.rawScalar)
*newval = newval->val.array.elems[0];
@@ -1726,7 +1726,7 @@ push_path(JsonbParseState **st, int level, Datum *path_elems,
* Since it contains only information about path slice from level to the
* end, the access index must be normalized by level.
*/
- enum jbvType *tpath = palloc0((path_len - level) * sizeof(enum jbvType));
+ enum jbvType *tpath = palloc0_array(enum jbvType, (path_len - level));
JsonbValue newkey;
/*
@@ -1855,14 +1855,14 @@ json_array_length(PG_FUNCTION_ARGS)
JsonLexContext lex;
JsonSemAction *sem;
- state = palloc0(sizeof(AlenState));
+ state = palloc0_object(AlenState);
state->lex = makeJsonLexContext(&lex, json, false);
/* palloc0 does this for us */
#if 0
state->count = 0;
#endif
- sem = palloc0(sizeof(JsonSemAction));
+ sem = palloc0_object(JsonSemAction);
sem->semstate = state;
sem->object_start = alen_object_start;
sem->scalar = alen_scalar;
@@ -2062,8 +2062,8 @@ each_worker(FunctionCallInfo fcinfo, bool as_text)
ReturnSetInfo *rsi;
EachState *state;
- state = palloc0(sizeof(EachState));
- sem = palloc0(sizeof(JsonSemAction));
+ state = palloc0_object(EachState);
+ sem = palloc0_object(JsonSemAction);
rsi = (ReturnSetInfo *) fcinfo->resultinfo;
@@ -2315,8 +2315,8 @@ elements_worker(FunctionCallInfo fcinfo, const char *funcname, bool as_text)
/* elements only needs escaped strings when as_text */
makeJsonLexContext(&lex, json, as_text);
- state = palloc0(sizeof(ElementsState));
- sem = palloc0(sizeof(JsonSemAction));
+ state = palloc0_object(ElementsState);
+ sem = palloc0_object(JsonSemAction);
InitMaterializedSRF(fcinfo, MAT_SRF_USE_EXPECTED_DESC | MAT_SRF_BLESS);
rsi = (ReturnSetInfo *) fcinfo->resultinfo;
@@ -2957,7 +2957,7 @@ populate_array(ArrayIOData *aio,
Assert(ctx.ndims > 0);
- lbs = palloc(sizeof(int) * ctx.ndims);
+ lbs = palloc_array(int, ctx.ndims);
for (i = 0; i < ctx.ndims; i++)
lbs[i] = 1;
@@ -3554,8 +3554,8 @@ populate_record(TupleDesc tupdesc,
record->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
if (defaultval)
{
@@ -3823,8 +3823,8 @@ get_json_object_as_hash(const char *json, int len, const char *funcname,
&ctl,
HASH_ELEM | HASH_STRINGS | HASH_CONTEXT);
- state = palloc0(sizeof(JHashState));
- sem = palloc0(sizeof(JsonSemAction));
+ state = palloc0_object(JHashState);
+ sem = palloc0_object(JsonSemAction);
state->function_name = funcname;
state->hash = tab;
@@ -3910,7 +3910,7 @@ hash_object_field_end(void *state, char *fname, bool isnull)
if (_state->save_json_start != NULL)
{
int len = _state->lex->prev_token_terminator - _state->save_json_start;
- char *val = palloc((len + 1) * sizeof(char));
+ char *val = palloc_array(char, (len + 1));
memcpy(val, _state->save_json_start, len);
val[len] = '\0';
@@ -4121,7 +4121,7 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname,
*/
update_cached_tupdesc(&cache->c.io.composite, cache->fn_mcxt);
- state = palloc0(sizeof(PopulateRecordsetState));
+ state = palloc0_object(PopulateRecordsetState);
/* make tuplestore in a sufficiently long-lived memory context */
old_cxt = MemoryContextSwitchTo(rsi->econtext->ecxt_per_query_memory);
@@ -4140,7 +4140,7 @@ populate_recordset_worker(FunctionCallInfo fcinfo, const char *funcname,
JsonLexContext lex;
JsonSemAction *sem;
- sem = palloc0(sizeof(JsonSemAction));
+ sem = palloc0_object(JsonSemAction);
makeJsonLexContext(&lex, json, true);
@@ -4361,7 +4361,7 @@ populate_recordset_object_field_end(void *state, char *fname, bool isnull)
if (_state->save_json_start != NULL)
{
int len = _state->lex->prev_token_terminator - _state->save_json_start;
- char *val = palloc((len + 1) * sizeof(char));
+ char *val = palloc_array(char, (len + 1));
memcpy(val, _state->save_json_start, len);
val[len] = '\0';
@@ -4497,8 +4497,8 @@ json_strip_nulls(PG_FUNCTION_ARGS)
JsonLexContext lex;
JsonSemAction *sem;
- state = palloc0(sizeof(StripnullState));
- sem = palloc0(sizeof(JsonSemAction));
+ state = palloc0_object(StripnullState);
+ sem = palloc0_object(JsonSemAction);
state->lex = makeJsonLexContext(&lex, json, true);
state->strval = makeStringInfo();
@@ -5710,8 +5710,8 @@ iterate_json_values(text *json, uint32 flags, void *action_state,
JsonIterateStringValuesAction action)
{
JsonLexContext lex;
- JsonSemAction *sem = palloc0(sizeof(JsonSemAction));
- IterateJsonStringValuesState *state = palloc0(sizeof(IterateJsonStringValuesState));
+ JsonSemAction *sem = palloc0_object(JsonSemAction);
+ IterateJsonStringValuesState *state = palloc0_object(IterateJsonStringValuesState);
state->lex = makeJsonLexContext(&lex, json, true);
state->action = action;
@@ -5831,8 +5831,8 @@ transform_json_string_values(text *json, void *action_state,
JsonTransformStringValuesAction transform_action)
{
JsonLexContext lex;
- JsonSemAction *sem = palloc0(sizeof(JsonSemAction));
- TransformJsonStringValuesState *state = palloc0(sizeof(TransformJsonStringValuesState));
+ JsonSemAction *sem = palloc0_object(JsonSemAction);
+ TransformJsonStringValuesState *state = palloc0_object(TransformJsonStringValuesState);
state->lex = makeJsonLexContext(&lex, json, true);
state->strval = makeStringInfo();
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index f6dfcb11a62..8c79e7b8bfa 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -1087,7 +1087,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
case jpiType:
{
- JsonbValue *jbv = palloc(sizeof(*jbv));
+ JsonbValue *jbv = palloc_object(JsonbValue);
jbv->type = jbvString;
jbv->val.string.val = pstrdup(JsonbTypeName(jb));
@@ -1117,7 +1117,7 @@ executeItemOptUnwrapTarget(JsonPathExecContext *cxt, JsonPathItem *jsp,
size = 1;
}
- jb = palloc(sizeof(*jb));
+ jb = palloc_object(JsonbValue);
jb->type = jbvNumeric;
jb->val.numeric = int64_to_numeric(size);
@@ -2160,7 +2160,7 @@ executeBinaryArithmExpr(JsonPathExecContext *cxt, JsonPathItem *jsp,
if (!jspGetNext(jsp, &elem) && !found)
return jperOk;
- lval = palloc(sizeof(*lval));
+ lval = palloc_object(JsonbValue);
lval->type = jbvNumeric;
lval->val.numeric = res;
@@ -2315,7 +2315,7 @@ executeNumericItemMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,
if (!jspGetNext(jsp, &next) && !found)
return jperOk;
- jb = palloc(sizeof(*jb));
+ jb = palloc_object(JsonbValue);
jb->type = jbvNumeric;
jb->val.numeric = DatumGetNumeric(datum);
@@ -3016,7 +3016,7 @@ GetJsonPathVar(void *cxt, char *varName, int varNameLen,
return NULL;
}
- result = palloc(sizeof(JsonbValue));
+ result = palloc_object(JsonbValue);
if (var->isnull)
{
*baseObjectId = 0;
@@ -3443,7 +3443,7 @@ compareNumeric(Numeric a, Numeric b)
static JsonbValue *
copyJsonbValue(JsonbValue *src)
{
- JsonbValue *dst = palloc(sizeof(*dst));
+ JsonbValue *dst = palloc_object(JsonbValue);
*dst = *src;
@@ -4117,7 +4117,7 @@ JsonTableInitOpaque(TableFuncScanState *state, int natts)
JsonExpr *je = castNode(JsonExpr, tf->docexpr);
List *args = NIL;
- cxt = palloc0(sizeof(JsonTableExecContext));
+ cxt = palloc0_object(JsonTableExecContext);
cxt->magic = JSON_TABLE_EXEC_CONTEXT_MAGIC;
/*
@@ -4136,7 +4136,7 @@ JsonTableInitOpaque(TableFuncScanState *state, int natts)
{
ExprState *state = lfirst_node(ExprState, exprlc);
String *name = lfirst_node(String, namelc);
- JsonPathVariable *var = palloc(sizeof(*var));
+ JsonPathVariable *var = palloc_object(JsonPathVariable);
var->name = pstrdup(name->sval);
var->namelen = strlen(var->name);
@@ -4193,7 +4193,7 @@ JsonTableInitPlan(JsonTableExecContext *cxt, JsonTablePlan *plan,
JsonTablePlanState *parentstate,
List *args, MemoryContext mcxt)
{
- JsonTablePlanState *planstate = palloc0(sizeof(*planstate));
+ JsonTablePlanState *planstate = palloc0_object(JsonTablePlanState);
planstate->plan = plan;
planstate->parent = parentstate;
diff --git a/src/backend/utils/adt/levenshtein.c b/src/backend/utils/adt/levenshtein.c
index 15a90f6f50c..e1f91161dd6 100644
--- a/src/backend/utils/adt/levenshtein.c
+++ b/src/backend/utils/adt/levenshtein.c
@@ -195,7 +195,7 @@ varstr_levenshtein(const char *source, int slen,
int i;
const char *cp = source;
- s_char_len = (int *) palloc((m + 1) * sizeof(int));
+ s_char_len = palloc_array(int, (m + 1));
for (i = 0; i < m; ++i)
{
s_char_len[i] = pg_mblen(cp);
@@ -209,7 +209,7 @@ varstr_levenshtein(const char *source, int slen,
++n;
/* Previous and current rows of notional array. */
- prev = (int *) palloc(2 * m * sizeof(int));
+ prev = palloc_array(int, 2 * m);
curr = prev + m;
/*
diff --git a/src/backend/utils/adt/lockfuncs.c b/src/backend/utils/adt/lockfuncs.c
index 00e67fb46d0..46e74d497fa 100644
--- a/src/backend/utils/adt/lockfuncs.c
+++ b/src/backend/utils/adt/lockfuncs.c
@@ -152,7 +152,7 @@ pg_lock_status(PG_FUNCTION_ARGS)
* Collect all the locking information that we will format and send
* out as a result set.
*/
- mystatus = (PG_Lock_Status *) palloc(sizeof(PG_Lock_Status));
+ mystatus = palloc_object(PG_Lock_Status);
funcctx->user_fctx = mystatus;
mystatus->lockData = GetLockStatusData();
@@ -476,7 +476,7 @@ pg_blocking_pids(PG_FUNCTION_ARGS)
lockData = GetBlockerStatusData(blocked_pid);
/* We can't need more output entries than there are reported PROCLOCKs */
- arrayelems = (Datum *) palloc(lockData->nlocks * sizeof(Datum));
+ arrayelems = palloc_array(Datum, lockData->nlocks);
narrayelems = 0;
/* For each blocked proc in the lock group ... */
@@ -578,7 +578,7 @@ pg_safe_snapshot_blocking_pids(PG_FUNCTION_ARGS)
Datum *blocker_datums;
/* A buffer big enough for any possible blocker list without truncation */
- blockers = (int *) palloc(MaxBackends * sizeof(int));
+ blockers = palloc_array(int, MaxBackends);
/* Collect a snapshot of processes waited for by GetSafeSnapshot */
num_blockers =
@@ -589,7 +589,7 @@ pg_safe_snapshot_blocking_pids(PG_FUNCTION_ARGS)
{
int i;
- blocker_datums = (Datum *) palloc(num_blockers * sizeof(Datum));
+ blocker_datums = palloc_array(Datum, num_blockers);
for (i = 0; i < num_blockers; ++i)
blocker_datums[i] = Int32GetDatum(blockers[i]);
}
diff --git a/src/backend/utils/adt/mac.c b/src/backend/utils/adt/mac.c
index 3644e9735f5..733515404d7 100644
--- a/src/backend/utils/adt/mac.c
+++ b/src/backend/utils/adt/mac.c
@@ -101,7 +101,7 @@ macaddr_in(PG_FUNCTION_ARGS)
(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
errmsg("invalid octet value in \"macaddr\" value: \"%s\"", str)));
- result = (macaddr *) palloc(sizeof(macaddr));
+ result = palloc_object(macaddr);
result->a = a;
result->b = b;
@@ -142,7 +142,7 @@ macaddr_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
macaddr *addr;
- addr = (macaddr *) palloc(sizeof(macaddr));
+ addr = palloc_object(macaddr);
addr->a = pq_getmsgbyte(buf);
addr->b = pq_getmsgbyte(buf);
@@ -289,7 +289,7 @@ macaddr_not(PG_FUNCTION_ARGS)
macaddr *addr = PG_GETARG_MACADDR_P(0);
macaddr *result;
- result = (macaddr *) palloc(sizeof(macaddr));
+ result = palloc_object(macaddr);
result->a = ~addr->a;
result->b = ~addr->b;
result->c = ~addr->c;
@@ -306,7 +306,7 @@ macaddr_and(PG_FUNCTION_ARGS)
macaddr *addr2 = PG_GETARG_MACADDR_P(1);
macaddr *result;
- result = (macaddr *) palloc(sizeof(macaddr));
+ result = palloc_object(macaddr);
result->a = addr1->a & addr2->a;
result->b = addr1->b & addr2->b;
result->c = addr1->c & addr2->c;
@@ -323,7 +323,7 @@ macaddr_or(PG_FUNCTION_ARGS)
macaddr *addr2 = PG_GETARG_MACADDR_P(1);
macaddr *result;
- result = (macaddr *) palloc(sizeof(macaddr));
+ result = palloc_object(macaddr);
result->a = addr1->a | addr2->a;
result->b = addr1->b | addr2->b;
result->c = addr1->c | addr2->c;
@@ -343,7 +343,7 @@ macaddr_trunc(PG_FUNCTION_ARGS)
macaddr *addr = PG_GETARG_MACADDR_P(0);
macaddr *result;
- result = (macaddr *) palloc(sizeof(macaddr));
+ result = palloc_object(macaddr);
result->a = addr->a;
result->b = addr->b;
@@ -374,7 +374,7 @@ macaddr_sortsupport(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(ssup->ssup_cxt);
- uss = palloc(sizeof(macaddr_sortsupport_state));
+ uss = palloc_object(macaddr_sortsupport_state);
uss->input_count = 0;
uss->estimating = true;
initHyperLogLog(&uss->abbr_card, 10);
diff --git a/src/backend/utils/adt/mac8.c b/src/backend/utils/adt/mac8.c
index 08e41ba4eea..ea715a7a0d4 100644
--- a/src/backend/utils/adt/mac8.c
+++ b/src/backend/utils/adt/mac8.c
@@ -207,7 +207,7 @@ macaddr8_in(PG_FUNCTION_ARGS)
else if (count != 8)
goto fail;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = a;
result->b = b;
@@ -256,7 +256,7 @@ macaddr8_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
macaddr8 *addr;
- addr = (macaddr8 *) palloc0(sizeof(macaddr8));
+ addr = palloc0_object(macaddr8);
addr->a = pq_getmsgbyte(buf);
addr->b = pq_getmsgbyte(buf);
@@ -417,7 +417,7 @@ macaddr8_not(PG_FUNCTION_ARGS)
macaddr8 *addr = PG_GETARG_MACADDR8_P(0);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = ~addr->a;
result->b = ~addr->b;
result->c = ~addr->c;
@@ -437,7 +437,7 @@ macaddr8_and(PG_FUNCTION_ARGS)
macaddr8 *addr2 = PG_GETARG_MACADDR8_P(1);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = addr1->a & addr2->a;
result->b = addr1->b & addr2->b;
result->c = addr1->c & addr2->c;
@@ -457,7 +457,7 @@ macaddr8_or(PG_FUNCTION_ARGS)
macaddr8 *addr2 = PG_GETARG_MACADDR8_P(1);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = addr1->a | addr2->a;
result->b = addr1->b | addr2->b;
result->c = addr1->c | addr2->c;
@@ -479,7 +479,7 @@ macaddr8_trunc(PG_FUNCTION_ARGS)
macaddr8 *addr = PG_GETARG_MACADDR8_P(0);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = addr->a;
result->b = addr->b;
@@ -502,7 +502,7 @@ macaddr8_set7bit(PG_FUNCTION_ARGS)
macaddr8 *addr = PG_GETARG_MACADDR8_P(0);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = addr->a | 0x02;
result->b = addr->b;
@@ -526,7 +526,7 @@ macaddrtomacaddr8(PG_FUNCTION_ARGS)
macaddr *addr6 = PG_GETARG_MACADDR_P(0);
macaddr8 *result;
- result = (macaddr8 *) palloc0(sizeof(macaddr8));
+ result = palloc0_object(macaddr8);
result->a = addr6->a;
result->b = addr6->b;
@@ -547,7 +547,7 @@ macaddr8tomacaddr(PG_FUNCTION_ARGS)
macaddr8 *addr = PG_GETARG_MACADDR8_P(0);
macaddr *result;
- result = (macaddr *) palloc0(sizeof(macaddr));
+ result = palloc0_object(macaddr);
if ((addr->d != 0xFF) || (addr->e != 0xFE))
ereport(ERROR,
diff --git a/src/backend/utils/adt/mcxtfuncs.c b/src/backend/utils/adt/mcxtfuncs.c
index 396c2f223b4..96c32851ab4 100644
--- a/src/backend/utils/adt/mcxtfuncs.c
+++ b/src/backend/utils/adt/mcxtfuncs.c
@@ -52,7 +52,7 @@ int_list_to_array(const List *list)
ArrayType *result_array;
length = list_length(list);
- datum_array = (Datum *) palloc(length * sizeof(Datum));
+ datum_array = palloc_array(Datum, length);
foreach_int(i, list)
datum_array[foreach_current_index(i)] = Int32GetDatum(i);
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 6fcfd031428..779004f3787 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -516,7 +516,7 @@ pg_get_catalog_foreign_keys(PG_FUNCTION_ARGS)
* array_in, and it wouldn't be very efficient if we could. Fill an
* FmgrInfo to use for the call.
*/
- arrayinp = (FmgrInfo *) palloc(sizeof(FmgrInfo));
+ arrayinp = palloc_object(FmgrInfo);
fmgr_info(F_ARRAY_IN, arrayinp);
funcctx->user_fctx = arrayinp;
diff --git a/src/backend/utils/adt/multirangetypes.c b/src/backend/utils/adt/multirangetypes.c
index cd84ced5b48..19e3a8fac3c 100644
--- a/src/backend/utils/adt/multirangetypes.c
+++ b/src/backend/utils/adt/multirangetypes.c
@@ -125,7 +125,7 @@ multirange_in(PG_FUNCTION_ARGS)
int32 range_count = 0;
int32 range_capacity = 8;
RangeType *range;
- RangeType **ranges = palloc(range_capacity * sizeof(RangeType *));
+ RangeType **ranges = palloc_array(RangeType *, range_capacity);
MultirangeIOData *cache;
MultirangeType *ret;
MultirangeParseState parse_state;
@@ -201,8 +201,9 @@ multirange_in(PG_FUNCTION_ARGS)
if (range_capacity == range_count)
{
range_capacity *= 2;
- ranges = (RangeType **)
- repalloc(ranges, range_capacity * sizeof(RangeType *));
+ ranges = repalloc_array(ranges,
+ RangeType *,
+ range_capacity);
}
ranges_seen++;
if (!InputFunctionCallSafe(&cache->typioproc,
@@ -348,7 +349,7 @@ multirange_recv(PG_FUNCTION_ARGS)
cache = get_multirange_io_data(fcinfo, mltrngtypoid, IOFunc_receive);
range_count = pq_getmsgint(buf, 4);
- ranges = palloc(range_count * sizeof(RangeType *));
+ ranges = palloc_array(RangeType *, range_count);
initStringInfo(&tmpbuf);
for (int i = 0; i < range_count; i++)
@@ -998,7 +999,7 @@ multirange_constructor2(PG_FUNCTION_ARGS)
deconstruct_array(rangeArray, rngtypid, rangetyp->typlen, rangetyp->typbyval,
rangetyp->typalign, &elements, &nulls, &range_count);
- ranges = palloc0(range_count * sizeof(RangeType *));
+ ranges = palloc0_array(RangeType *, range_count);
for (i = 0; i < range_count; i++)
{
if (nulls[i])
@@ -1102,7 +1103,7 @@ multirange_union(PG_FUNCTION_ARGS)
multirange_deserialize(typcache->rngtype, mr2, &range_count2, &ranges2);
range_count3 = range_count1 + range_count2;
- ranges3 = palloc0(range_count3 * sizeof(RangeType *));
+ ranges3 = palloc0_array(RangeType *, range_count3);
memcpy(ranges3, ranges1, range_count1 * sizeof(RangeType *));
memcpy(ranges3 + range_count1, ranges2, range_count2 * sizeof(RangeType *));
PG_RETURN_MULTIRANGE_P(make_multirange(typcache->type_id, typcache->rngtype,
@@ -1156,7 +1157,7 @@ multirange_minus_internal(Oid mltrngtypoid, TypeCacheEntry *rangetyp,
* Worst case: every range in ranges1 makes a different cut to some range
* in ranges2.
*/
- ranges3 = palloc0((range_count1 + range_count2) * sizeof(RangeType *));
+ ranges3 = palloc0_array(RangeType *, (range_count1 + range_count2));
range_count3 = 0;
/*
@@ -1282,7 +1283,7 @@ multirange_intersect_internal(Oid mltrngtypoid, TypeCacheEntry *rangetyp,
* but one extra won't hurt.
*-----------------------------------------------
*/
- ranges3 = palloc0((range_count1 + range_count2) * sizeof(RangeType *));
+ ranges3 = palloc0_array(RangeType *, (range_count1 + range_count2));
range_count3 = 0;
/*
@@ -1395,7 +1396,7 @@ range_agg_finalfn(PG_FUNCTION_ARGS)
mltrngtypoid = get_fn_expr_rettype(fcinfo->flinfo);
typcache = multirange_get_typcache(fcinfo, mltrngtypoid);
- ranges = palloc0(range_count * sizeof(RangeType *));
+ ranges = palloc0_array(RangeType *, range_count);
for (i = 0; i < range_count; i++)
ranges[i] = DatumGetRangeTypeP(state->dvalues[i]);
@@ -2746,7 +2747,7 @@ multirange_unnest(PG_FUNCTION_ARGS)
mr = PG_GETARG_MULTIRANGE_P(0);
/* allocate memory for user context */
- fctx = (multirange_unnest_fctx *) palloc(sizeof(multirange_unnest_fctx));
+ fctx = palloc_object(multirange_unnest_fctx);
/* initialize state */
fctx->mr = mr;
diff --git a/src/backend/utils/adt/multirangetypes_selfuncs.c b/src/backend/utils/adt/multirangetypes_selfuncs.c
index b87bcf3ea30..47f2276909e 100644
--- a/src/backend/utils/adt/multirangetypes_selfuncs.c
+++ b/src/backend/utils/adt/multirangetypes_selfuncs.c
@@ -496,8 +496,8 @@ calc_hist_selectivity(TypeCacheEntry *typcache, VariableStatData *vardata,
* bounds.
*/
nhist = hslot.nvalues;
- hist_lower = (RangeBound *) palloc(sizeof(RangeBound) * nhist);
- hist_upper = (RangeBound *) palloc(sizeof(RangeBound) * nhist);
+ hist_lower = palloc_array(RangeBound, nhist);
+ hist_upper = palloc_array(RangeBound, nhist);
for (i = 0; i < nhist; i++)
{
bool empty;
diff --git a/src/backend/utils/adt/name.c b/src/backend/utils/adt/name.c
index b2487881d54..d9ea779f039 100644
--- a/src/backend/utils/adt/name.c
+++ b/src/backend/utils/adt/name.c
@@ -299,7 +299,7 @@ current_schemas(PG_FUNCTION_ARGS)
int i;
ArrayType *array;
- names = (Datum *) palloc(list_length(search_path) * sizeof(Datum));
+ names = palloc_array(Datum, list_length(search_path));
i = 0;
foreach(l, search_path)
{
diff --git a/src/backend/utils/adt/network.c b/src/backend/utils/adt/network.c
index 450dacd031c..105930d56c9 100644
--- a/src/backend/utils/adt/network.c
+++ b/src/backend/utils/adt/network.c
@@ -77,7 +77,7 @@ network_in(char *src, bool is_cidr, Node *escontext)
int bits;
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
/*
* First, check to see if this is an IPv6 or IPv4 address. IPv6 addresses
@@ -198,7 +198,7 @@ network_recv(StringInfo buf, bool is_cidr)
i;
/* make sure any unused bits in a CIDR value are zeroed */
- addr = (inet *) palloc0(sizeof(inet));
+ addr = palloc0_object(inet);
ip_family(addr) = pq_getmsgbyte(buf);
if (ip_family(addr) != PGSQL_AF_INET &&
@@ -367,7 +367,7 @@ cidr_set_masklen(PG_FUNCTION_ARGS)
inet *
cidr_set_masklen_internal(const inet *src, int bits)
{
- inet *dst = (inet *) palloc0(sizeof(inet));
+ inet *dst = palloc0_object(inet);
ip_family(dst) = ip_family(src);
ip_bits(dst) = bits;
@@ -448,7 +448,7 @@ network_sortsupport(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(ssup->ssup_cxt);
- uss = palloc(sizeof(network_sortsupport_state));
+ uss = palloc_object(network_sortsupport_state);
uss->input_count = 0;
uss->estimating = true;
initHyperLogLog(&uss->abbr_card, 10);
@@ -1288,7 +1288,7 @@ network_broadcast(PG_FUNCTION_ARGS)
*b;
/* make sure any unused bits are zeroed */
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
maxbytes = ip_addrsize(ip);
bits = ip_bits(ip);
@@ -1332,7 +1332,7 @@ network_network(PG_FUNCTION_ARGS)
*b;
/* make sure any unused bits are zeroed */
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
bits = ip_bits(ip);
a = ip_addr(ip);
@@ -1375,7 +1375,7 @@ network_netmask(PG_FUNCTION_ARGS)
unsigned char *b;
/* make sure any unused bits are zeroed */
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
bits = ip_bits(ip);
b = ip_addr(dst);
@@ -1418,7 +1418,7 @@ network_hostmask(PG_FUNCTION_ARGS)
unsigned char *b;
/* make sure any unused bits are zeroed */
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
maxbytes = ip_addrsize(ip);
bits = ip_maxbits(ip) - ip_bits(ip);
@@ -1853,7 +1853,7 @@ inetnot(PG_FUNCTION_ARGS)
inet *ip = PG_GETARG_INET_PP(0);
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
{
int nb = ip_addrsize(ip);
@@ -1879,7 +1879,7 @@ inetand(PG_FUNCTION_ARGS)
inet *ip2 = PG_GETARG_INET_PP(1);
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
if (ip_family(ip) != ip_family(ip2))
ereport(ERROR,
@@ -1911,7 +1911,7 @@ inetor(PG_FUNCTION_ARGS)
inet *ip2 = PG_GETARG_INET_PP(1);
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
if (ip_family(ip) != ip_family(ip2))
ereport(ERROR,
@@ -1941,7 +1941,7 @@ internal_inetpl(inet *ip, int64 addend)
{
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
{
int nb = ip_addrsize(ip);
diff --git a/src/backend/utils/adt/network_gist.c b/src/backend/utils/adt/network_gist.c
index a08c4953789..30145f5985a 100644
--- a/src/backend/utils/adt/network_gist.c
+++ b/src/backend/utils/adt/network_gist.c
@@ -475,7 +475,7 @@ build_inet_union_key(int family, int minbits, int commonbits,
GistInetKey *result;
/* Make sure any unused bits are zeroed. */
- result = (GistInetKey *) palloc0(sizeof(GistInetKey));
+ result = palloc0_object(GistInetKey);
gk_ip_family(result) = family;
gk_ip_minbits(result) = minbits;
@@ -546,13 +546,13 @@ inet_gist_compress(PG_FUNCTION_ARGS)
if (entry->leafkey)
{
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
if (DatumGetPointer(entry->key) != NULL)
{
inet *in = DatumGetInetPP(entry->key);
GistInetKey *r;
- r = (GistInetKey *) palloc0(sizeof(GistInetKey));
+ r = palloc0_object(GistInetKey);
gk_ip_family(r) = ip_family(in);
gk_ip_minbits(r) = ip_bits(in);
@@ -594,14 +594,14 @@ inet_gist_fetch(PG_FUNCTION_ARGS)
GISTENTRY *retval;
inet *dst;
- dst = (inet *) palloc0(sizeof(inet));
+ dst = palloc0_object(inet);
ip_family(dst) = gk_ip_family(key);
ip_bits(dst) = gk_ip_minbits(key);
memcpy(ip_addr(dst), gk_ip_addr(key), ip_addrsize(dst));
SET_INET_VARSIZE(dst);
- retval = palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, InetPGetDatum(dst), entry->rel, entry->page,
entry->offset, false);
diff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.c
index 40dcbc7b671..c586671ee51 100644
--- a/src/backend/utils/adt/numeric.c
+++ b/src/backend/utils/adt/numeric.c
@@ -1776,8 +1776,7 @@ generate_series_step_numeric(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- fctx = (generate_series_numeric_fctx *)
- palloc(sizeof(generate_series_numeric_fctx));
+ fctx = palloc_object(generate_series_numeric_fctx);
/*
* Use fctx to keep state from call to call. Seed current with the
@@ -2137,7 +2136,7 @@ numeric_sortsupport(PG_FUNCTION_ARGS)
NumericSortSupport *nss;
MemoryContext oldcontext = MemoryContextSwitchTo(ssup->ssup_cxt);
- nss = palloc(sizeof(NumericSortSupport));
+ nss = palloc_object(NumericSortSupport);
/*
* palloc a buffer for handling unaligned packed values in addition to
@@ -4943,7 +4942,7 @@ makeNumericAggState(FunctionCallInfo fcinfo, bool calcSumX2)
old_context = MemoryContextSwitchTo(agg_context);
- state = (NumericAggState *) palloc0(sizeof(NumericAggState));
+ state = palloc0_object(NumericAggState);
state->calcSumX2 = calcSumX2;
state->agg_context = agg_context;
@@ -4961,7 +4960,7 @@ makeNumericAggStateCurrentContext(bool calcSumX2)
{
NumericAggState *state;
- state = (NumericAggState *) palloc0(sizeof(NumericAggState));
+ state = palloc0_object(NumericAggState);
state->calcSumX2 = calcSumX2;
state->agg_context = CurrentMemoryContext;
@@ -5606,7 +5605,7 @@ makeInt128AggState(FunctionCallInfo fcinfo, bool calcSumX2)
old_context = MemoryContextSwitchTo(agg_context);
- state = (Int128AggState *) palloc0(sizeof(Int128AggState));
+ state = palloc0_object(Int128AggState);
state->calcSumX2 = calcSumX2;
MemoryContextSwitchTo(old_context);
@@ -5623,7 +5622,7 @@ makeInt128AggStateCurrentContext(bool calcSumX2)
{
Int128AggState *state;
- state = (Int128AggState *) palloc0(sizeof(Int128AggState));
+ state = palloc0_object(Int128AggState);
state->calcSumX2 = calcSumX2;
return state;
@@ -12504,8 +12503,8 @@ accum_sum_rescale(NumericSumAccum *accum, const NumericVar *val)
weightdiff = accum_weight - old_weight;
- new_pos_digits = palloc0(accum_ndigits * sizeof(int32));
- new_neg_digits = palloc0(accum_ndigits * sizeof(int32));
+ new_pos_digits = palloc0_array(int32, accum_ndigits);
+ new_neg_digits = palloc0_array(int32, accum_ndigits);
if (accum->pos_digits)
{
diff --git a/src/backend/utils/adt/oracle_compat.c b/src/backend/utils/adt/oracle_compat.c
index 2cba7cd1621..cec3d600d3d 100644
--- a/src/backend/utils/adt/oracle_compat.c
+++ b/src/backend/utils/adt/oracle_compat.c
@@ -406,7 +406,7 @@ dotrim(const char *string, int stringlen,
int str_len;
stringchars = (const char **) palloc(stringlen * sizeof(char *));
- stringmblen = (int *) palloc(stringlen * sizeof(int));
+ stringmblen = palloc_array(int, stringlen);
stringnchars = 0;
p = string;
len = stringlen;
@@ -420,7 +420,7 @@ dotrim(const char *string, int stringlen,
}
setchars = (const char **) palloc(setlen * sizeof(char *));
- setmblen = (int *) palloc(setlen * sizeof(int));
+ setmblen = palloc_array(int, setlen);
setnchars = 0;
p = set;
len = setlen;
diff --git a/src/backend/utils/adt/orderedsetaggs.c b/src/backend/utils/adt/orderedsetaggs.c
index 9457d239715..3a5af61416e 100644
--- a/src/backend/utils/adt/orderedsetaggs.c
+++ b/src/backend/utils/adt/orderedsetaggs.c
@@ -153,7 +153,7 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples)
qcontext = fcinfo->flinfo->fn_mcxt;
oldcontext = MemoryContextSwitchTo(qcontext);
- qstate = (OSAPerQueryState *) palloc0(sizeof(OSAPerQueryState));
+ qstate = palloc0_object(OSAPerQueryState);
qstate->aggref = aggref;
qstate->qcontext = qcontext;
@@ -278,7 +278,7 @@ ordered_set_startup(FunctionCallInfo fcinfo, bool use_tuples)
/* Now build the stuff we need in group-lifespan context */
oldcontext = MemoryContextSwitchTo(gcontext);
- osastate = (OSAPerGroupState *) palloc(sizeof(OSAPerGroupState));
+ osastate = palloc_object(OSAPerGroupState);
osastate->qstate = qstate;
osastate->gcontext = gcontext;
@@ -668,7 +668,7 @@ setup_pct_info(int num_percentiles,
struct pct_info *pct_info;
int i;
- pct_info = (struct pct_info *) palloc(num_percentiles * sizeof(struct pct_info));
+ pct_info = palloc_array(struct pct_info, num_percentiles);
for (i = 0; i < num_percentiles; i++)
{
@@ -774,8 +774,8 @@ percentile_disc_multi_final(PG_FUNCTION_ARGS)
osastate->number_of_rows,
false);
- result_datum = (Datum *) palloc(num_percentiles * sizeof(Datum));
- result_isnull = (bool *) palloc(num_percentiles * sizeof(bool));
+ result_datum = palloc_array(Datum, num_percentiles);
+ result_isnull = palloc_array(bool, num_percentiles);
/*
* Start by dealing with any nulls in the param array - those are sorted
@@ -897,8 +897,8 @@ percentile_cont_multi_final_common(FunctionCallInfo fcinfo,
osastate->number_of_rows,
true);
- result_datum = (Datum *) palloc(num_percentiles * sizeof(Datum));
- result_isnull = (bool *) palloc(num_percentiles * sizeof(bool));
+ result_datum = palloc_array(Datum, num_percentiles);
+ result_isnull = palloc_array(bool, num_percentiles);
/*
* Start by dealing with any nulls in the param array - those are sorted
diff --git a/src/backend/utils/adt/pg_locale_libc.c b/src/backend/utils/adt/pg_locale_libc.c
index 8f9a8637897..f65617500bb 100644
--- a/src/backend/utils/adt/pg_locale_libc.c
+++ b/src/backend/utils/adt/pg_locale_libc.c
@@ -208,7 +208,7 @@ strlower_libc_mb(char *dest, size_t destsize, const char *src, ssize_t srclen,
errmsg("out of memory")));
/* Output workspace cannot have more codes than input bytes */
- workspace = (wchar_t *) palloc((srclen + 1) * sizeof(wchar_t));
+ workspace = palloc_array(wchar_t, (srclen + 1));
char2wchar(workspace, srclen + 1, src, srclen, locale);
@@ -303,7 +303,7 @@ strtitle_libc_mb(char *dest, size_t destsize, const char *src, ssize_t srclen,
errmsg("out of memory")));
/* Output workspace cannot have more codes than input bytes */
- workspace = (wchar_t *) palloc((srclen + 1) * sizeof(wchar_t));
+ workspace = palloc_array(wchar_t, (srclen + 1));
char2wchar(workspace, srclen + 1, src, srclen, locale);
@@ -391,7 +391,7 @@ strupper_libc_mb(char *dest, size_t destsize, const char *src, ssize_t srclen,
errmsg("out of memory")));
/* Output workspace cannot have more codes than input bytes */
- workspace = (wchar_t *) palloc((srclen + 1) * sizeof(wchar_t));
+ workspace = palloc_array(wchar_t, (srclen + 1));
char2wchar(workspace, srclen + 1, src, srclen, locale);
diff --git a/src/backend/utils/adt/rangetypes_gist.c b/src/backend/utils/adt/rangetypes_gist.c
index a60ee985e74..04a7d5c6ceb 100644
--- a/src/backend/utils/adt/rangetypes_gist.c
+++ b/src/backend/utils/adt/rangetypes_gist.c
@@ -251,7 +251,7 @@ multirange_gist_compress(PG_FUNCTION_ARGS)
MultirangeType *mr = DatumGetMultirangeTypeP(entry->key);
RangeType *r;
TypeCacheEntry *typcache;
- GISTENTRY *retval = palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
typcache = multirange_get_typcache(fcinfo, MultirangeTypeGetOid(mr));
r = multirange_get_union_range(typcache->rngtype, mr);
@@ -1240,8 +1240,7 @@ range_gist_single_sorting_split(TypeCacheEntry *typcache,
maxoff = entryvec->n - 1;
- sortItems = (SingleBoundSortItem *)
- palloc(maxoff * sizeof(SingleBoundSortItem));
+ sortItems = palloc_array(SingleBoundSortItem, maxoff);
/*
* Prepare auxiliary array and sort the values.
@@ -1343,8 +1342,8 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache,
context.first = true;
/* Allocate arrays for sorted range bounds */
- by_lower = (NonEmptyRange *) palloc(nentries * sizeof(NonEmptyRange));
- by_upper = (NonEmptyRange *) palloc(nentries * sizeof(NonEmptyRange));
+ by_lower = palloc_array(NonEmptyRange, nentries);
+ by_upper = palloc_array(NonEmptyRange, nentries);
/* Fill arrays of bounds */
for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
@@ -1509,7 +1508,7 @@ range_gist_double_sorting_split(TypeCacheEntry *typcache,
* either group without affecting overlap along selected axis.
*/
common_entries_count = 0;
- common_entries = (CommonEntry *) palloc(nentries * sizeof(CommonEntry));
+ common_entries = palloc_array(CommonEntry, nentries);
/*
* Distribute entries which can be distributed unambiguously, and collect
diff --git a/src/backend/utils/adt/rangetypes_selfuncs.c b/src/backend/utils/adt/rangetypes_selfuncs.c
index d126abc5a82..dd2216474d8 100644
--- a/src/backend/utils/adt/rangetypes_selfuncs.c
+++ b/src/backend/utils/adt/rangetypes_selfuncs.c
@@ -412,8 +412,8 @@ calc_hist_selectivity(TypeCacheEntry *typcache, VariableStatData *vardata,
* bounds.
*/
nhist = hslot.nvalues;
- hist_lower = (RangeBound *) palloc(sizeof(RangeBound) * nhist);
- hist_upper = (RangeBound *) palloc(sizeof(RangeBound) * nhist);
+ hist_lower = palloc_array(RangeBound, nhist);
+ hist_upper = palloc_array(RangeBound, nhist);
for (i = 0; i < nhist; i++)
{
range_deserialize(typcache, DatumGetRangeTypeP(hslot.values[i]),
diff --git a/src/backend/utils/adt/rangetypes_spgist.c b/src/backend/utils/adt/rangetypes_spgist.c
index 9b6d7061a18..41eeb4b98ad 100644
--- a/src/backend/utils/adt/rangetypes_spgist.c
+++ b/src/backend/utils/adt/rangetypes_spgist.c
@@ -216,8 +216,8 @@ spg_range_quad_picksplit(PG_FUNCTION_ARGS)
RangeTypeGetOid(DatumGetRangeTypeP(in->datums[0])));
/* Allocate memory for bounds */
- lowerBounds = palloc(sizeof(RangeBound) * in->nTuples);
- upperBounds = palloc(sizeof(RangeBound) * in->nTuples);
+ lowerBounds = palloc_array(RangeBound, in->nTuples);
+ upperBounds = palloc_array(RangeBound, in->nTuples);
j = 0;
/* Deserialize bounds of ranges, count non-empty ranges */
diff --git a/src/backend/utils/adt/rangetypes_typanalyze.c b/src/backend/utils/adt/rangetypes_typanalyze.c
index 9dc73af1992..5c7d5bca678 100644
--- a/src/backend/utils/adt/rangetypes_typanalyze.c
+++ b/src/backend/utils/adt/rangetypes_typanalyze.c
@@ -151,9 +151,9 @@ compute_range_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
has_subdiff = OidIsValid(typcache->rng_subdiff_finfo.fn_oid);
/* Allocate memory to hold range bounds and lengths of the sample ranges. */
- lowers = (RangeBound *) palloc(sizeof(RangeBound) * samplerows);
- uppers = (RangeBound *) palloc(sizeof(RangeBound) * samplerows);
- lengths = (float8 *) palloc(sizeof(float8) * samplerows);
+ lowers = palloc_array(RangeBound, samplerows);
+ uppers = palloc_array(RangeBound, samplerows);
+ lengths = palloc_array(float8, samplerows);
/* Loop over the sample ranges. */
for (range_no = 0; range_no < samplerows; range_no++)
@@ -288,7 +288,7 @@ compute_range_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
if (num_hist > num_bins)
num_hist = num_bins + 1;
- bound_hist_values = (Datum *) palloc(num_hist * sizeof(Datum));
+ bound_hist_values = palloc_array(Datum, num_hist);
/*
* The object of this loop is to construct ranges from first and
@@ -352,7 +352,7 @@ compute_range_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
if (num_hist > num_bins)
num_hist = num_bins + 1;
- length_hist_values = (Datum *) palloc(num_hist * sizeof(Datum));
+ length_hist_values = palloc_array(Datum, num_hist);
/*
* The object of this loop is to copy the first and last lengths[]
@@ -401,7 +401,7 @@ compute_range_stats(VacAttrStats *stats, AnalyzeAttrFetchFunc fetchfunc,
stats->statypalign[slot_idx] = 'd';
/* Store the fraction of empty ranges */
- emptyfrac = (float4 *) palloc(sizeof(float4));
+ emptyfrac = palloc_object(float4);
*emptyfrac = ((double) empty_cnt) / ((double) non_null_cnt);
stats->stanumbers[slot_idx] = emptyfrac;
stats->numnumbers[slot_idx] = 1;
diff --git a/src/backend/utils/adt/regexp.c b/src/backend/utils/adt/regexp.c
index edee1f7880b..f331f466930 100644
--- a/src/backend/utils/adt/regexp.c
+++ b/src/backend/utils/adt/regexp.c
@@ -189,7 +189,7 @@ RE_compile_and_cache(text *text_re, int cflags, Oid collation)
*/
/* Convert pattern string to wide characters */
- pattern = (pg_wchar *) palloc((text_re_len + 1) * sizeof(pg_wchar));
+ pattern = palloc_array(pg_wchar, (text_re_len + 1));
pattern_len = pg_mb2wchar_with_len(text_re_val,
pattern,
text_re_len);
@@ -329,7 +329,7 @@ RE_execute(regex_t *re, char *dat, int dat_len,
bool match;
/* Convert data string to wide characters */
- data = (pg_wchar *) palloc((dat_len + 1) * sizeof(pg_wchar));
+ data = palloc_array(pg_wchar, (dat_len + 1));
data_len = pg_mb2wchar_with_len(dat, data, dat_len);
/* Perform RE match and return result */
@@ -1420,7 +1420,7 @@ setup_regexp_matches(text *orig_str, text *pattern, pg_re_flags *re_flags,
bool ignore_degenerate,
bool fetching_unmatched)
{
- regexp_matches_ctx *matchctx = palloc0(sizeof(regexp_matches_ctx));
+ regexp_matches_ctx *matchctx = palloc0_object(regexp_matches_ctx);
int eml = pg_database_encoding_max_length();
int orig_len;
pg_wchar *wide_str;
@@ -1440,7 +1440,7 @@ setup_regexp_matches(text *orig_str, text *pattern, pg_re_flags *re_flags,
/* convert string to pg_wchar form for matching */
orig_len = VARSIZE_ANY_EXHDR(orig_str);
- wide_str = (pg_wchar *) palloc(sizeof(pg_wchar) * (orig_len + 1));
+ wide_str = palloc_array(pg_wchar, (orig_len + 1));
wide_len = pg_mb2wchar_with_len(VARDATA_ANY(orig_str), wide_str, orig_len);
/* set up the compiled pattern */
@@ -1463,7 +1463,7 @@ setup_regexp_matches(text *orig_str, text *pattern, pg_re_flags *re_flags,
}
/* temporary output space for RE package */
- pmatch = palloc(sizeof(regmatch_t) * pmatch_len);
+ pmatch = palloc_array(regmatch_t, pmatch_len);
/*
* the real output space (grown dynamically if needed)
diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c
index fe5edc0027d..8c5a75ce167 100644
--- a/src/backend/utils/adt/rowtypes.c
+++ b/src/backend/utils/adt/rowtypes.c
@@ -140,8 +140,8 @@ record_in(PG_FUNCTION_ARGS)
my_extra->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
/*
* Scan the string. We use "buf" to accumulate the de-quoted data for
@@ -383,8 +383,8 @@ record_out(PG_FUNCTION_ARGS)
my_extra->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
/* Break down the tuple into fields */
heap_deform_tuple(&tuple, tupdesc, values, nulls);
@@ -539,8 +539,8 @@ record_recv(PG_FUNCTION_ARGS)
my_extra->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
/* Fetch number of columns user thinks it has */
usercols = pq_getmsgint(buf, 4);
@@ -741,8 +741,8 @@ record_send(PG_FUNCTION_ARGS)
my_extra->ncolumns = ncolumns;
}
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
/* Break down the tuple into fields */
heap_deform_tuple(&tuple, tupdesc, values, nulls);
@@ -901,11 +901,11 @@ record_cmp(FunctionCallInfo fcinfo)
}
/* Break down the tuples into fields */
- values1 = (Datum *) palloc(ncolumns1 * sizeof(Datum));
- nulls1 = (bool *) palloc(ncolumns1 * sizeof(bool));
+ values1 = palloc_array(Datum, ncolumns1);
+ nulls1 = palloc_array(bool, ncolumns1);
heap_deform_tuple(&tuple1, tupdesc1, values1, nulls1);
- values2 = (Datum *) palloc(ncolumns2 * sizeof(Datum));
- nulls2 = (bool *) palloc(ncolumns2 * sizeof(bool));
+ values2 = palloc_array(Datum, ncolumns2);
+ nulls2 = palloc_array(bool, ncolumns2);
heap_deform_tuple(&tuple2, tupdesc2, values2, nulls2);
/*
@@ -1145,11 +1145,11 @@ record_eq(PG_FUNCTION_ARGS)
}
/* Break down the tuples into fields */
- values1 = (Datum *) palloc(ncolumns1 * sizeof(Datum));
- nulls1 = (bool *) palloc(ncolumns1 * sizeof(bool));
+ values1 = palloc_array(Datum, ncolumns1);
+ nulls1 = palloc_array(bool, ncolumns1);
heap_deform_tuple(&tuple1, tupdesc1, values1, nulls1);
- values2 = (Datum *) palloc(ncolumns2 * sizeof(Datum));
- nulls2 = (bool *) palloc(ncolumns2 * sizeof(bool));
+ values2 = palloc_array(Datum, ncolumns2);
+ nulls2 = palloc_array(bool, ncolumns2);
heap_deform_tuple(&tuple2, tupdesc2, values2, nulls2);
/*
@@ -1425,11 +1425,11 @@ record_image_cmp(FunctionCallInfo fcinfo)
}
/* Break down the tuples into fields */
- values1 = (Datum *) palloc(ncolumns1 * sizeof(Datum));
- nulls1 = (bool *) palloc(ncolumns1 * sizeof(bool));
+ values1 = palloc_array(Datum, ncolumns1);
+ nulls1 = palloc_array(bool, ncolumns1);
heap_deform_tuple(&tuple1, tupdesc1, values1, nulls1);
- values2 = (Datum *) palloc(ncolumns2 * sizeof(Datum));
- nulls2 = (bool *) palloc(ncolumns2 * sizeof(bool));
+ values2 = palloc_array(Datum, ncolumns2);
+ nulls2 = palloc_array(bool, ncolumns2);
heap_deform_tuple(&tuple2, tupdesc2, values2, nulls2);
/*
@@ -1671,11 +1671,11 @@ record_image_eq(PG_FUNCTION_ARGS)
}
/* Break down the tuples into fields */
- values1 = (Datum *) palloc(ncolumns1 * sizeof(Datum));
- nulls1 = (bool *) palloc(ncolumns1 * sizeof(bool));
+ values1 = palloc_array(Datum, ncolumns1);
+ nulls1 = palloc_array(bool, ncolumns1);
heap_deform_tuple(&tuple1, tupdesc1, values1, nulls1);
- values2 = (Datum *) palloc(ncolumns2 * sizeof(Datum));
- nulls2 = (bool *) palloc(ncolumns2 * sizeof(bool));
+ values2 = palloc_array(Datum, ncolumns2);
+ nulls2 = palloc_array(bool, ncolumns2);
heap_deform_tuple(&tuple2, tupdesc2, values2, nulls2);
/*
@@ -1863,8 +1863,8 @@ hash_record(PG_FUNCTION_ARGS)
}
/* Break down the tuple into fields */
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
heap_deform_tuple(&tuple, tupdesc, values, nulls);
for (int i = 0; i < ncolumns; i++)
@@ -1984,8 +1984,8 @@ hash_record_extended(PG_FUNCTION_ARGS)
}
/* Break down the tuple into fields */
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
heap_deform_tuple(&tuple, tupdesc, values, nulls);
for (int i = 0; i < ncolumns; i++)
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 54dad975553..0dc176d5369 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -2567,7 +2567,7 @@ pg_get_constraintdef_worker(Oid constraintId, bool fullCommand,
deconstruct_array_builtin(DatumGetArrayTypeP(val), OIDOID,
&elems, NULL, &nElems);
- operators = (Oid *) palloc(nElems * sizeof(Oid));
+ operators = palloc_array(Oid, nElems);
for (i = 0; i < nElems; i++)
operators[i] = DatumGetObjectId(elems[i]);
@@ -3707,7 +3707,7 @@ deparse_context_for(const char *aliasname, Oid relid)
deparse_namespace *dpns;
RangeTblEntry *rte;
- dpns = (deparse_namespace *) palloc0(sizeof(deparse_namespace));
+ dpns = palloc0_object(deparse_namespace);
/* Build a minimal RTE for the rel */
rte = makeNode(RangeTblEntry);
@@ -3751,7 +3751,7 @@ deparse_context_for_plan_tree(PlannedStmt *pstmt, List *rtable_names)
{
deparse_namespace *dpns;
- dpns = (deparse_namespace *) palloc0(sizeof(deparse_namespace));
+ dpns = palloc0_object(deparse_namespace);
/* Initialize fields that stay the same across the whole plan tree */
dpns->rtable = pstmt->rtable;
@@ -4394,7 +4394,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
tupdesc = RelationGetDescr(rel);
ncolumns = tupdesc->natts;
- real_colnames = (char **) palloc(ncolumns * sizeof(char *));
+ real_colnames = palloc_array(char *, ncolumns);
for (i = 0; i < ncolumns; i++)
{
@@ -4438,7 +4438,7 @@ set_relation_column_names(deparse_namespace *dpns, RangeTblEntry *rte,
colnames = rte->eref->colnames;
ncolumns = list_length(colnames);
- real_colnames = (char **) palloc(ncolumns * sizeof(char *));
+ real_colnames = palloc_array(char *, ncolumns);
i = 0;
foreach(lc, colnames)
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index d3d1e485bb2..6bdc1566413 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -2490,8 +2490,8 @@ eqjoinsel_inner(Oid opfuncoid, Oid collation,
fcinfo->args[0].isnull = false;
fcinfo->args[1].isnull = false;
- hasmatch1 = (bool *) palloc0(sslot1->nvalues * sizeof(bool));
- hasmatch2 = (bool *) palloc0(sslot2->nvalues * sizeof(bool));
+ hasmatch1 = palloc0_array(bool, sslot1->nvalues);
+ hasmatch2 = palloc0_array(bool, sslot2->nvalues);
/*
* Note we assume that each MCV will match at most one member of the
@@ -2720,8 +2720,8 @@ eqjoinsel_semi(Oid opfuncoid, Oid collation,
fcinfo->args[0].isnull = false;
fcinfo->args[1].isnull = false;
- hasmatch1 = (bool *) palloc0(sslot1->nvalues * sizeof(bool));
- hasmatch2 = (bool *) palloc0(clamped_nvalues2 * sizeof(bool));
+ hasmatch1 = palloc0_array(bool, sslot1->nvalues);
+ hasmatch2 = palloc0_array(bool, clamped_nvalues2);
/*
* Note we assume that each MCV will match at most one member of the
@@ -3345,7 +3345,7 @@ add_unique_group_var(PlannerInfo *root, List *varinfos,
}
}
- varinfo = (GroupVarInfo *) palloc(sizeof(GroupVarInfo));
+ varinfo = palloc_object(GroupVarInfo);
varinfo->var = var;
varinfo->rel = vardata->rel;
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index ba9bae05069..24f9b132638 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -936,7 +936,7 @@ interval_in(PG_FUNCTION_ARGS)
PG_RETURN_NULL();
}
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
switch (dtype)
{
@@ -1003,7 +1003,7 @@ interval_recv(PG_FUNCTION_ARGS)
int32 typmod = PG_GETARG_INT32(2);
Interval *interval;
- interval = (Interval *) palloc(sizeof(Interval));
+ interval = palloc_object(Interval);
interval->time = pq_getmsgint64(buf);
interval->day = pq_getmsgint(buf, sizeof(interval->day));
@@ -1330,7 +1330,7 @@ interval_scale(PG_FUNCTION_ARGS)
int32 typmod = PG_GETARG_INT32(1);
Interval *result;
- result = palloc(sizeof(Interval));
+ result = palloc_object(Interval);
*result = *interval;
AdjustIntervalForTypmod(result, typmod, NULL);
@@ -1544,7 +1544,7 @@ make_interval(PG_FUNCTION_ARGS)
if (isinf(secs) || isnan(secs))
goto out_of_range;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/* years and months -> months */
if (pg_mul_s32_overflow(years, MONTHS_PER_YEAR, &result->month) ||
@@ -2782,7 +2782,7 @@ timestamp_mi(PG_FUNCTION_ARGS)
Timestamp dt2 = PG_GETARG_TIMESTAMP(1);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle infinities.
@@ -2877,7 +2877,7 @@ interval_justify_interval(PG_FUNCTION_ARGS)
TimeOffset wholeday;
int32 wholemonth;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
result->month = span->month;
result->day = span->day;
result->time = span->time;
@@ -2956,7 +2956,7 @@ interval_justify_hours(PG_FUNCTION_ARGS)
Interval *result;
TimeOffset wholeday;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
result->month = span->month;
result->day = span->day;
result->time = span->time;
@@ -2998,7 +2998,7 @@ interval_justify_days(PG_FUNCTION_ARGS)
Interval *result;
int32 wholemonth;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
result->month = span->month;
result->day = span->day;
result->time = span->time;
@@ -3400,7 +3400,7 @@ interval_um(PG_FUNCTION_ARGS)
Interval *interval = PG_GETARG_INTERVAL_P(0);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
interval_um_internal(interval, result);
PG_RETURN_INTERVAL_P(result);
@@ -3458,7 +3458,7 @@ interval_pl(PG_FUNCTION_ARGS)
Interval *span2 = PG_GETARG_INTERVAL_P(1);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle infinities.
@@ -3514,7 +3514,7 @@ interval_mi(PG_FUNCTION_ARGS)
Interval *span2 = PG_GETARG_INTERVAL_P(1);
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle infinities.
@@ -3568,7 +3568,7 @@ interval_mul(PG_FUNCTION_ARGS)
orig_day = span->day;
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle NaN and infinities.
@@ -3698,7 +3698,7 @@ interval_div(PG_FUNCTION_ARGS)
orig_day = span->day;
Interval *result;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
if (factor == 0.0)
ereport(ERROR,
@@ -3927,7 +3927,7 @@ makeIntervalAggState(FunctionCallInfo fcinfo)
old_context = MemoryContextSwitchTo(agg_context);
- state = (IntervalAggState *) palloc0(sizeof(IntervalAggState));
+ state = palloc0_object(IntervalAggState);
MemoryContextSwitchTo(old_context);
@@ -4114,7 +4114,7 @@ interval_avg_deserialize(PG_FUNCTION_ARGS)
initReadOnlyStringInfo(&buf, VARDATA_ANY(sstate),
VARSIZE_ANY_EXHDR(sstate));
- result = (IntervalAggState *) palloc0(sizeof(IntervalAggState));
+ result = palloc0_object(IntervalAggState);
/* N */
result->N = pq_getmsgint64(&buf);
@@ -4181,7 +4181,7 @@ interval_avg(PG_FUNCTION_ARGS)
(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
errmsg("interval out of range")));
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
if (state->pInfcount > 0)
INTERVAL_NOEND(result);
else
@@ -4218,7 +4218,7 @@ interval_sum(PG_FUNCTION_ARGS)
(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
errmsg("interval out of range")));
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
if (state->pInfcount > 0)
INTERVAL_NOEND(result);
@@ -4251,7 +4251,7 @@ timestamp_age(PG_FUNCTION_ARGS)
struct pg_tm tt2,
*tm2 = &tt2;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle infinities.
@@ -4399,7 +4399,7 @@ timestamptz_age(PG_FUNCTION_ARGS)
int tz1;
int tz2;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
/*
* Handle infinities.
@@ -5072,7 +5072,7 @@ interval_trunc(PG_FUNCTION_ARGS)
struct pg_itm tt,
*tm = &tt;
- result = (Interval *) palloc(sizeof(Interval));
+ result = palloc_object(Interval);
lowunits = downcase_truncate_identifier(VARDATA_ANY(units),
VARSIZE_ANY_EXHDR(units),
@@ -6622,8 +6622,7 @@ generate_series_timestamp(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- fctx = (generate_series_timestamp_fctx *)
- palloc(sizeof(generate_series_timestamp_fctx));
+ fctx = palloc_object(generate_series_timestamp_fctx);
/*
* Use fctx to keep state from call to call. Seed current with the
@@ -6707,8 +6706,7 @@ generate_series_timestamptz_internal(FunctionCallInfo fcinfo)
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* allocate memory for user context */
- fctx = (generate_series_timestamptz_fctx *)
- palloc(sizeof(generate_series_timestamptz_fctx));
+ fctx = palloc_object(generate_series_timestamptz_fctx);
/*
* Use fctx to keep state from call to call. Seed current with the
diff --git a/src/backend/utils/adt/tsginidx.c b/src/backend/utils/adt/tsginidx.c
index 2712fd89df0..619ba5a21e9 100644
--- a/src/backend/utils/adt/tsginidx.c
+++ b/src/backend/utils/adt/tsginidx.c
@@ -73,7 +73,7 @@ gin_extract_tsvector(PG_FUNCTION_ARGS)
int i;
WordEntry *we = ARRPTR(vector);
- entries = (Datum *) palloc(sizeof(Datum) * vector->size);
+ entries = palloc_array(Datum, vector->size);
for (i = 0; i < vector->size; i++)
{
@@ -133,7 +133,7 @@ gin_extract_tsquery(PG_FUNCTION_ARGS)
}
*nentries = j;
- entries = (Datum *) palloc(sizeof(Datum) * j);
+ entries = palloc_array(Datum, j);
partialmatch = *ptr_partialmatch = (bool *) palloc(sizeof(bool) * j);
/*
@@ -142,7 +142,7 @@ gin_extract_tsquery(PG_FUNCTION_ARGS)
* consistent method. We use the same map for each entry.
*/
*extra_data = (Pointer *) palloc(sizeof(Pointer) * j);
- map_item_operand = (int *) palloc0(sizeof(int) * query->size);
+ map_item_operand = palloc0_array(int, query->size);
/* Now rescan the VAL items and fill in the arrays */
j = 0;
diff --git a/src/backend/utils/adt/tsgistidx.c b/src/backend/utils/adt/tsgistidx.c
index 935187b37c7..3bc0da32368 100644
--- a/src/backend/utils/adt/tsgistidx.c
+++ b/src/backend/utils/adt/tsgistidx.c
@@ -212,7 +212,7 @@ gtsvector_compress(PG_FUNCTION_ARGS)
res = ressign;
}
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -231,7 +231,7 @@ gtsvector_compress(PG_FUNCTION_ARGS)
}
res = gtsvector_alloc(SIGNKEY | ALLISTRUE, siglen, sign);
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(res),
entry->rel, entry->page,
entry->offset, false);
@@ -251,7 +251,7 @@ gtsvector_decompress(PG_FUNCTION_ARGS)
if (key != (SignTSVector *) DatumGetPointer(entry->key))
{
- GISTENTRY *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ GISTENTRY *retval = palloc_object(GISTENTRY);
gistentryinit(*retval, PointerGetDatum(key),
entry->rel, entry->page,
@@ -641,7 +641,7 @@ gtsvector_picksplit(PG_FUNCTION_ARGS)
v->spl_left = (OffsetNumber *) palloc(nbytes);
v->spl_right = (OffsetNumber *) palloc(nbytes);
- cache = (CACHESIGN *) palloc(sizeof(CACHESIGN) * (maxoff + 2));
+ cache = palloc_array(CACHESIGN, (maxoff + 2));
cache_sign = palloc(siglen * (maxoff + 2));
for (j = 0; j < maxoff + 2; j++)
@@ -688,7 +688,7 @@ gtsvector_picksplit(PG_FUNCTION_ARGS)
maxoff = OffsetNumberNext(maxoff);
fillcache(&cache[maxoff], GETENTRY(entryvec, maxoff), siglen);
/* sort before ... */
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/src/backend/utils/adt/tsquery.c b/src/backend/utils/adt/tsquery.c
index b1bad7bd60c..bb119879b51 100644
--- a/src/backend/utils/adt/tsquery.c
+++ b/src/backend/utils/adt/tsquery.c
@@ -534,7 +534,7 @@ pushOperator(TSQueryParserState state, int8 oper, int16 distance)
Assert(oper == OP_NOT || oper == OP_AND || oper == OP_OR || oper == OP_PHRASE);
- tmp = (QueryOperator *) palloc0(sizeof(QueryOperator));
+ tmp = palloc0_object(QueryOperator);
tmp->type = QI_OPR;
tmp->oper = oper;
tmp->distance = (oper == OP_PHRASE) ? distance : 0;
@@ -559,7 +559,7 @@ pushValue_internal(TSQueryParserState state, pg_crc32 valcrc, int distance, int
errmsg("operand is too long in tsquery: \"%s\"",
state->buffer)));
- tmp = (QueryOperand *) palloc0(sizeof(QueryOperand));
+ tmp = palloc0_object(QueryOperand);
tmp->type = QI_VAL;
tmp->weight = weight;
tmp->prefix = prefix;
@@ -617,7 +617,7 @@ pushStop(TSQueryParserState state)
{
QueryOperand *tmp;
- tmp = (QueryOperand *) palloc0(sizeof(QueryOperand));
+ tmp = palloc0_object(QueryOperand);
tmp->type = QI_VALSTOP;
state->polstr = lcons(tmp, state->polstr);
diff --git a/src/backend/utils/adt/tsquery_cleanup.c b/src/backend/utils/adt/tsquery_cleanup.c
index 590d7c7989c..45de2da900c 100644
--- a/src/backend/utils/adt/tsquery_cleanup.c
+++ b/src/backend/utils/adt/tsquery_cleanup.c
@@ -32,7 +32,7 @@ typedef struct NODE
static NODE *
maketree(QueryItem *in)
{
- NODE *node = (NODE *) palloc(sizeof(NODE));
+ NODE *node = palloc_object(NODE);
/* since this function recurses, it could be driven to stack overflow. */
check_stack_depth();
diff --git a/src/backend/utils/adt/tsquery_gist.c b/src/backend/utils/adt/tsquery_gist.c
index f7f94c1c760..55fc93ebef5 100644
--- a/src/backend/utils/adt/tsquery_gist.c
+++ b/src/backend/utils/adt/tsquery_gist.c
@@ -33,7 +33,7 @@ gtsquery_compress(PG_FUNCTION_ARGS)
{
TSQuerySign sign;
- retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
+ retval = palloc_object(GISTENTRY);
sign = makeTSQuerySign(DatumGetTSQuery(entry->key));
gistentryinit(*retval, TSQuerySignGetDatum(sign),
@@ -213,7 +213,7 @@ gtsquery_picksplit(PG_FUNCTION_ARGS)
datum_r = GETENTRY(entryvec, seed_2);
maxoff = OffsetNumberNext(maxoff);
- costvector = (SPLITCOST *) palloc(sizeof(SPLITCOST) * maxoff);
+ costvector = palloc_array(SPLITCOST, maxoff);
for (j = FirstOffsetNumber; j <= maxoff; j = OffsetNumberNext(j))
{
costvector[j - 1].pos = j;
diff --git a/src/backend/utils/adt/tsquery_op.c b/src/backend/utils/adt/tsquery_op.c
index bb77e923062..bb5999b1b16 100644
--- a/src/backend/utils/adt/tsquery_op.c
+++ b/src/backend/utils/adt/tsquery_op.c
@@ -32,7 +32,7 @@ tsquery_numnode(PG_FUNCTION_ARGS)
static QTNode *
join_tsqueries(TSQuery a, TSQuery b, int8 operator, uint16 distance)
{
- QTNode *res = (QTNode *) palloc0(sizeof(QTNode));
+ QTNode *res = palloc0_object(QTNode);
res->flags |= QTN_NEEDFREE;
@@ -165,7 +165,7 @@ tsquery_not(PG_FUNCTION_ARGS)
if (a->size == 0)
PG_RETURN_POINTER(a);
- res = (QTNode *) palloc0(sizeof(QTNode));
+ res = palloc0_object(QTNode);
res->flags |= QTN_NEEDFREE;
@@ -272,7 +272,7 @@ collectTSQueryValues(TSQuery a, int *nvalues_p)
int nvalues = 0;
int i;
- values = (char **) palloc(sizeof(char *) * a->size);
+ values = palloc_array(char *, a->size);
for (i = 0; i < a->size; i++)
{
diff --git a/src/backend/utils/adt/tsquery_rewrite.c b/src/backend/utils/adt/tsquery_rewrite.c
index 2f9e81fbfea..5d165d96934 100644
--- a/src/backend/utils/adt/tsquery_rewrite.c
+++ b/src/backend/utils/adt/tsquery_rewrite.c
@@ -92,7 +92,7 @@ findeq(QTNode *node, QTNode *ex, QTNode *subs, bool *isfind)
node->valnode->qoperator.oper == OP_OR);
/* matched[] will record which children of node matched */
- matched = (bool *) palloc0(node->nchild * sizeof(bool));
+ matched = palloc0_array(bool, node->nchild);
nmatched = 0;
i = j = 0;
while (i < node->nchild && j < ex->nchild)
diff --git a/src/backend/utils/adt/tsquery_util.c b/src/backend/utils/adt/tsquery_util.c
index 1c24b041aa2..f1ec5a5a27a 100644
--- a/src/backend/utils/adt/tsquery_util.c
+++ b/src/backend/utils/adt/tsquery_util.c
@@ -24,7 +24,7 @@
QTNode *
QT2QTN(QueryItem *in, char *operand)
{
- QTNode *node = (QTNode *) palloc0(sizeof(QTNode));
+ QTNode *node = palloc0_object(QTNode);
/* since this function recurses, it could be driven to stack overflow. */
check_stack_depth();
@@ -262,7 +262,7 @@ QTNBinary(QTNode *in)
while (in->nchild > 2)
{
- QTNode *nn = (QTNode *) palloc0(sizeof(QTNode));
+ QTNode *nn = palloc0_object(QTNode);
nn->valnode = (QueryItem *) palloc0(sizeof(QueryItem));
nn->child = (QTNode **) palloc0(sizeof(QTNode *) * 2);
@@ -400,7 +400,7 @@ QTNCopy(QTNode *in)
/* since this function recurses, it could be driven to stack overflow. */
check_stack_depth();
- out = (QTNode *) palloc(sizeof(QTNode));
+ out = palloc_object(QTNode);
*out = *in;
out->valnode = (QueryItem *) palloc(sizeof(QueryItem));
diff --git a/src/backend/utils/adt/tsrank.c b/src/backend/utils/adt/tsrank.c
index 38f8505fd17..29cd32c48ef 100644
--- a/src/backend/utils/adt/tsrank.c
+++ b/src/backend/utils/adt/tsrank.c
@@ -160,7 +160,7 @@ SortAndUniqItems(TSQuery q, int *size)
**ptr,
**prevptr;
- ptr = res = (QueryOperand **) palloc(sizeof(QueryOperand *) * *size);
+ ptr = res = palloc_array(QueryOperand *, *size);
/* Collect all operands from the tree to res */
while ((*size)--)
@@ -225,7 +225,7 @@ calc_rank_and(const float *w, TSVector t, TSQuery q)
pfree(item);
return calc_rank_or(w, t, q);
}
- pos = (WordEntryPosVector **) palloc0(sizeof(WordEntryPosVector *) * q->size);
+ pos = palloc0_array(WordEntryPosVector *, q->size);
/* A dummy WordEntryPos array to use when haspos is false */
posnull.npos = 1;
@@ -743,7 +743,7 @@ get_docrep(TSVector txt, QueryRepresentation *qr, int *doclen)
cur = 0;
DocRepresentation *doc;
- doc = (DocRepresentation *) palloc(sizeof(DocRepresentation) * len);
+ doc = palloc_array(DocRepresentation, len);
/*
* Iterate through query to make DocRepresentation for words and it's
@@ -780,7 +780,8 @@ get_docrep(TSVector txt, QueryRepresentation *qr, int *doclen)
while (cur + dimt >= len)
{
len *= 2;
- doc = (DocRepresentation *) repalloc(doc, sizeof(DocRepresentation) * len);
+ doc = repalloc_array(doc, DocRepresentation,
+ len);
}
/* iterations over entry's positions */
diff --git a/src/backend/utils/adt/tsvector.c b/src/backend/utils/adt/tsvector.c
index 650be842f28..47b8a46caea 100644
--- a/src/backend/utils/adt/tsvector.c
+++ b/src/backend/utils/adt/tsvector.c
@@ -205,7 +205,7 @@ tsvectorin(PG_FUNCTION_ARGS)
state = init_tsvector_parser(buf, 0, escontext);
arrlen = 64;
- arr = (WordEntryIN *) palloc(sizeof(WordEntryIN) * arrlen);
+ arr = palloc_array(WordEntryIN, arrlen);
cur = tmpbuf = (char *) palloc(buflen);
while (gettoken_tsvector(state, &token, &toklen, &pos, &poslen, NULL))
@@ -229,8 +229,7 @@ tsvectorin(PG_FUNCTION_ARGS)
if (len >= arrlen)
{
arrlen *= 2;
- arr = (WordEntryIN *)
- repalloc(arr, sizeof(WordEntryIN) * arrlen);
+ arr = repalloc_array(arr, WordEntryIN, arrlen);
}
while ((cur - tmpbuf) + toklen >= buflen)
{
diff --git a/src/backend/utils/adt/tsvector_op.c b/src/backend/utils/adt/tsvector_op.c
index 1fa1275ca63..b6998ba98f0 100644
--- a/src/backend/utils/adt/tsvector_op.c
+++ b/src/backend/utils/adt/tsvector_op.c
@@ -594,7 +594,7 @@ tsvector_delete_arr(PG_FUNCTION_ARGS)
* here we optimize things for that scenario: iterate through lexarr
* performing binary search of each lexeme from lexarr in tsvector.
*/
- skip_indices = palloc0(nlex * sizeof(int));
+ skip_indices = palloc0_array(int, nlex);
for (i = skip_count = 0; i < nlex; i++)
{
char *lex;
@@ -686,8 +686,8 @@ tsvector_unnest(PG_FUNCTION_ARGS)
* that in two separate arrays.
*/
posv = _POSVECPTR(tsin, arrin + i);
- positions = palloc(posv->npos * sizeof(Datum));
- weights = palloc(posv->npos * sizeof(Datum));
+ positions = palloc_array(Datum, posv->npos);
+ weights = palloc_array(Datum, posv->npos);
for (j = 0; j < posv->npos; j++)
{
positions[j] = Int16GetDatum(WEP_GETPOS(posv->pos[j]));
@@ -725,7 +725,7 @@ tsvector_to_array(PG_FUNCTION_ARGS)
int i;
ArrayType *array;
- elements = palloc(tsin->size * sizeof(Datum));
+ elements = palloc_array(Datum, tsin->size);
for (i = 0; i < tsin->size; i++)
{
@@ -1391,7 +1391,8 @@ checkcondition_str(void *checkval, QueryOperand *val, ExecPhraseData *data)
if (totalpos == 0)
{
totalpos = 256;
- allpos = palloc(sizeof(WordEntryPos) * totalpos);
+ allpos = palloc_array(WordEntryPos,
+ totalpos);
}
else
{
diff --git a/src/backend/utils/adt/tsvector_parser.c b/src/backend/utils/adt/tsvector_parser.c
index e1620d3ed1f..7bf2f71b899 100644
--- a/src/backend/utils/adt/tsvector_parser.c
+++ b/src/backend/utils/adt/tsvector_parser.c
@@ -322,13 +322,16 @@ gettoken_tsvector(TSVectorParseState state,
if (posalen == 0)
{
posalen = 4;
- pos = (WordEntryPos *) palloc(sizeof(WordEntryPos) * posalen);
+ pos = palloc_array(WordEntryPos,
+ posalen);
npos = 0;
}
else if (npos + 1 >= posalen)
{
posalen *= 2;
- pos = (WordEntryPos *) repalloc(pos, sizeof(WordEntryPos) * posalen);
+ pos = repalloc_array(pos,
+ WordEntryPos,
+ posalen);
}
npos++;
WEP_SETPOS(pos[npos - 1], LIMITPOS(atoi(state->prsbuf)));
diff --git a/src/backend/utils/adt/uuid.c b/src/backend/utils/adt/uuid.c
index 4f8402ef925..d40fd791e40 100644
--- a/src/backend/utils/adt/uuid.c
+++ b/src/backend/utils/adt/uuid.c
@@ -76,7 +76,7 @@ uuid_in(PG_FUNCTION_ARGS)
char *uuid_str = PG_GETARG_CSTRING(0);
pg_uuid_t *uuid;
- uuid = (pg_uuid_t *) palloc(sizeof(*uuid));
+ uuid = palloc_object(pg_uuid_t);
string_to_uuid(uuid_str, uuid, fcinfo->context);
PG_RETURN_UUID_P(uuid);
}
@@ -284,7 +284,7 @@ uuid_sortsupport(PG_FUNCTION_ARGS)
oldcontext = MemoryContextSwitchTo(ssup->ssup_cxt);
- uss = palloc(sizeof(uuid_sortsupport_state));
+ uss = palloc_object(uuid_sortsupport_state);
uss->input_count = 0;
uss->estimating = true;
initHyperLogLog(&uss->abbr_card, 10);
diff --git a/src/backend/utils/adt/varlena.c b/src/backend/utils/adt/varlena.c
index 34796f2e27c..2ddec094595 100644
--- a/src/backend/utils/adt/varlena.c
+++ b/src/backend/utils/adt/varlena.c
@@ -1934,7 +1934,7 @@ varstr_sortsupport(SortSupport ssup, Oid typid, Oid collid)
*/
if (abbreviate || !collate_c)
{
- sss = palloc(sizeof(VarStringSortSupport));
+ sss = palloc_object(VarStringSortSupport);
sss->buf1 = palloc(TEXTBUFLEN);
sss->buflen1 = TEXTBUFLEN;
sss->buf2 = palloc(TEXTBUFLEN);
@@ -4237,7 +4237,7 @@ replace_text_regexp(text *src_text, text *pattern_text,
initStringInfo(&buf);
/* Convert data string to wide characters. */
- data = (pg_wchar *) palloc((src_text_len + 1) * sizeof(pg_wchar));
+ data = palloc_array(pg_wchar, (src_text_len + 1));
data_len = pg_mb2wchar_with_len(VARDATA_ANY(src_text), data, src_text_len);
/* Check whether replace_text has escapes, especially regexp submatches. */
@@ -6370,7 +6370,7 @@ unicode_normalize_func(PG_FUNCTION_ARGS)
/* convert to pg_wchar */
size = pg_mbstrlen_with_len(VARDATA_ANY(input), VARSIZE_ANY_EXHDR(input));
- input_chars = palloc((size + 1) * sizeof(pg_wchar));
+ input_chars = palloc_array(pg_wchar, (size + 1));
p = (unsigned char *) VARDATA_ANY(input);
for (i = 0; i < size; i++)
{
@@ -6438,7 +6438,7 @@ unicode_is_normalized(PG_FUNCTION_ARGS)
/* convert to pg_wchar */
size = pg_mbstrlen_with_len(VARDATA_ANY(input), VARSIZE_ANY_EXHDR(input));
- input_chars = palloc((size + 1) * sizeof(pg_wchar));
+ input_chars = palloc_array(pg_wchar, (size + 1));
p = (unsigned char *) VARDATA_ANY(input);
for (i = 0; i < size; i++)
{
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index db8d0d6a7e8..c9402cc0c38 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -1205,7 +1205,7 @@ pg_xml_init(PgXmlStrictness strictness)
pg_xml_init_library();
/* Create error handling context structure */
- errcxt = (PgXmlErrorContext *) palloc(sizeof(PgXmlErrorContext));
+ errcxt = palloc_object(PgXmlErrorContext);
errcxt->magic = ERRCXT_MAGIC;
errcxt->strictness = strictness;
errcxt->err_occurred = false;
@@ -1364,7 +1364,7 @@ xml_pnstrdup(const xmlChar *str, size_t len)
{
xmlChar *result;
- result = (xmlChar *) palloc((len + 1) * sizeof(xmlChar));
+ result = palloc_array(xmlChar, (len + 1));
memcpy(result, str, len * sizeof(xmlChar));
result[len] = 0;
return result;
@@ -1376,7 +1376,7 @@ pg_xmlCharStrndup(const char *str, size_t len)
{
xmlChar *result;
- result = (xmlChar *) palloc((len + 1) * sizeof(xmlChar));
+ result = palloc_array(xmlChar, (len + 1));
memcpy(result, str, len);
result[len] = '\0';
@@ -4686,7 +4686,7 @@ XmlTableInitOpaque(TableFuncScanState *state, int natts)
XmlTableBuilderData *xtCxt;
PgXmlErrorContext *xmlerrcxt;
- xtCxt = palloc0(sizeof(XmlTableBuilderData));
+ xtCxt = palloc0_object(XmlTableBuilderData);
xtCxt->magic = XMLTABLE_CONTEXT_MAGIC;
xtCxt->natts = natts;
xtCxt->xpathscomp = palloc0(sizeof(xmlXPathCompExprPtr) * natts);
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index 9ad7681f155..efc44546eae 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -913,7 +913,7 @@ InitCatCache(int id,
*/
if (CacheHdr == NULL)
{
- CacheHdr = (CatCacheHeader *) palloc(sizeof(CatCacheHeader));
+ CacheHdr = palloc_object(CatCacheHeader);
slist_init(&CacheHdr->ch_caches);
CacheHdr->ch_ntup = 0;
#ifdef CATCACHE_STATS
@@ -2214,7 +2214,7 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp, Datum *arguments,
{
/* Set up keys for a negative cache entry */
oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
- ct = (CatCTup *) palloc(sizeof(CatCTup));
+ ct = palloc_object(CatCTup);
/*
* Store keys - they'll point into separately allocated memory if not
diff --git a/src/backend/utils/cache/evtcache.c b/src/backend/utils/cache/evtcache.c
index ce596bf5638..8c7955ecda7 100644
--- a/src/backend/utils/cache/evtcache.c
+++ b/src/backend/utils/cache/evtcache.c
@@ -172,7 +172,7 @@ BuildEventTriggerCache(void)
continue;
/* Allocate new cache item. */
- item = palloc0(sizeof(EventTriggerCacheItem));
+ item = palloc0_object(EventTriggerCacheItem);
item->fnoid = form->evtfoid;
item->enabled = form->evtenabled;
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f41d314eae3..ec8da2855c0 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -704,7 +704,7 @@ PrepareInplaceInvalidationState(void)
Assert(inplaceInvalInfo == NULL);
/* gone after WAL insertion CritSection ends, so use current context */
- myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));
+ myInfo = palloc0_object(InvalidationInfo);
/* Stash our messages past end of the transactional messages, if any. */
if (transInvalInfo != NULL)
@@ -1029,8 +1029,7 @@ inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
*RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
- *msgs = msgarray = (SharedInvalidationMessage *)
- palloc(nummsgs * sizeof(SharedInvalidationMessage));
+ *msgs = msgarray = palloc_array(SharedInvalidationMessage, nummsgs);
nmsgs = 0;
ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
diff --git a/src/backend/utils/cache/lsyscache.c b/src/backend/utils/cache/lsyscache.c
index 7a9af03c960..6227366a3b2 100644
--- a/src/backend/utils/cache/lsyscache.c
+++ b/src/backend/utils/cache/lsyscache.c
@@ -624,8 +624,7 @@ get_op_btree_interpretation(Oid opno)
op_strategy = (StrategyNumber) op_form->amopstrategy;
Assert(op_strategy >= 1 && op_strategy <= 5);
- thisresult = (OpBtreeInterpretation *)
- palloc(sizeof(OpBtreeInterpretation));
+ thisresult = palloc_object(OpBtreeInterpretation);
thisresult->opfamily_id = op_form->amopfamily;
thisresult->strategy = op_strategy;
thisresult->oplefttype = op_form->amoplefttype;
@@ -667,8 +666,7 @@ get_op_btree_interpretation(Oid opno)
continue;
/* OK, report it with "strategy" COMPARE_NE */
- thisresult = (OpBtreeInterpretation *)
- palloc(sizeof(OpBtreeInterpretation));
+ thisresult = palloc_object(OpBtreeInterpretation);
thisresult->opfamily_id = op_form->amopfamily;
thisresult->strategy = COMPARE_NE;
thisresult->oplefttype = op_form->amoplefttype;
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index 55db8f53705..5ac2b3a6c84 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -216,7 +216,7 @@ CreateCachedPlan(RawStmt *raw_parse_tree,
*/
oldcxt = MemoryContextSwitchTo(source_context);
- plansource = (CachedPlanSource *) palloc0(sizeof(CachedPlanSource));
+ plansource = palloc0_object(CachedPlanSource);
plansource->magic = CACHEDPLANSOURCE_MAGIC;
plansource->raw_parse_tree = copyObject(raw_parse_tree);
plansource->query_string = pstrdup(query_string);
@@ -285,7 +285,7 @@ CreateOneShotCachedPlan(RawStmt *raw_parse_tree,
* Create and fill the CachedPlanSource struct within the caller's memory
* context. Most fields are just left empty for the moment.
*/
- plansource = (CachedPlanSource *) palloc0(sizeof(CachedPlanSource));
+ plansource = palloc0_object(CachedPlanSource);
plansource->magic = CACHEDPLANSOURCE_MAGIC;
plansource->raw_parse_tree = raw_parse_tree;
plansource->query_string = query_string;
@@ -992,7 +992,7 @@ BuildCachedPlan(CachedPlanSource *plansource, List *qlist,
/*
* Create and fill the CachedPlan struct within the new context.
*/
- plan = (CachedPlan *) palloc(sizeof(CachedPlan));
+ plan = palloc_object(CachedPlan);
plan->magic = CACHEDPLAN_MAGIC;
plan->stmt_list = plist;
@@ -1556,7 +1556,7 @@ CopyCachedPlan(CachedPlanSource *plansource)
oldcxt = MemoryContextSwitchTo(source_context);
- newsource = (CachedPlanSource *) palloc0(sizeof(CachedPlanSource));
+ newsource = palloc0_object(CachedPlanSource);
newsource->magic = CACHEDPLANSOURCE_MAGIC;
newsource->raw_parse_tree = copyObject(plansource->raw_parse_tree);
newsource->query_string = pstrdup(plansource->query_string);
@@ -1702,7 +1702,7 @@ GetCachedExpression(Node *expr)
oldcxt = MemoryContextSwitchTo(cexpr_context);
- cexpr = (CachedExpression *) palloc(sizeof(CachedExpression));
+ cexpr = palloc_object(CachedExpression);
cexpr->magic = CACHEDEXPR_MAGIC;
cexpr->expr = copyObject(expr);
cexpr->is_valid = true;
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 43219a9629c..a7164e1f345 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -851,8 +851,7 @@ RelationBuildRuleLock(Relation relation)
if (numlocks >= maxlocks)
{
maxlocks *= 2;
- rules = (RewriteRule **)
- repalloc(rules, sizeof(RewriteRule *) * maxlocks);
+ rules = repalloc_array(rules, RewriteRule *, maxlocks);
}
rules[numlocks++] = rule;
}
@@ -1066,8 +1065,8 @@ RelationBuildDesc(Oid targetRelId, bool insertIt)
int allocsize;
allocsize = in_progress_list_maxlen * 2;
- in_progress_list = repalloc(in_progress_list,
- allocsize * sizeof(*in_progress_list));
+ in_progress_list = repalloc_array(in_progress_list,
+ InProgressEnt, allocsize);
in_progress_list_maxlen = allocsize;
}
in_progress_offset = in_progress_list_len++;
@@ -1961,7 +1960,7 @@ formrdesc(const char *relationName, Oid relationReltype,
/* mark not-null status */
if (has_not_null)
{
- TupleConstr *constr = (TupleConstr *) palloc0(sizeof(TupleConstr));
+ TupleConstr *constr = palloc0_object(TupleConstr);
constr->has_not_null = true;
relation->rd_att->constr = constr;
@@ -3062,7 +3061,7 @@ RememberToFreeTupleDescAtEOX(TupleDesc td)
oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
- EOXactTupleDescArray = (TupleDesc *) palloc(16 * sizeof(TupleDesc));
+ EOXactTupleDescArray = palloc_array(TupleDesc, 16);
EOXactTupleDescArrayLen = 16;
NextEOXactTupleDescNum = 0;
MemoryContextSwitchTo(oldcxt);
@@ -3073,8 +3072,8 @@ RememberToFreeTupleDescAtEOX(TupleDesc td)
Assert(EOXactTupleDescArrayLen > 0);
- EOXactTupleDescArray = (TupleDesc *) repalloc(EOXactTupleDescArray,
- newlen * sizeof(TupleDesc));
+ EOXactTupleDescArray = repalloc_array(EOXactTupleDescArray,
+ TupleDesc, newlen);
EOXactTupleDescArrayLen = newlen;
}
@@ -3124,7 +3123,7 @@ AssertPendingSyncs_RelationCache(void)
*/
PushActiveSnapshot(GetTransactionSnapshot());
maxrels = 1;
- rels = palloc(maxrels * sizeof(*rels));
+ rels = palloc_array(Relation, maxrels);
nrels = 0;
hash_seq_init(&status, GetLockMethodLocalHash());
while ((locallock = (LOCALLOCK *) hash_seq_search(&status)) != NULL)
@@ -3144,7 +3143,7 @@ AssertPendingSyncs_RelationCache(void)
if (nrels >= maxrels)
{
maxrels *= 2;
- rels = repalloc(rels, maxrels * sizeof(*rels));
+ rels = repalloc_array(rels, Relation, maxrels);
}
rels[nrels++] = r;
}
@@ -3572,7 +3571,7 @@ RelationBuildLocalRelation(const char *relname,
if (has_not_null)
{
- TupleConstr *constr = (TupleConstr *) palloc0(sizeof(TupleConstr));
+ TupleConstr *constr = palloc0_object(TupleConstr);
constr->has_not_null = true;
rel->rd_att->constr = constr;
@@ -5590,9 +5589,9 @@ RelationGetExclusionInfo(Relation indexRelation,
indnkeyatts = IndexRelationGetNumberOfKeyAttributes(indexRelation);
/* Allocate result space in caller context */
- *operators = ops = (Oid *) palloc(sizeof(Oid) * indnkeyatts);
- *procs = funcs = (Oid *) palloc(sizeof(Oid) * indnkeyatts);
- *strategies = strats = (uint16 *) palloc(sizeof(uint16) * indnkeyatts);
+ *operators = ops = palloc_array(Oid, indnkeyatts);
+ *procs = funcs = palloc_array(Oid, indnkeyatts);
+ *strategies = strats = palloc_array(uint16, indnkeyatts);
/* Quick exit if we have the data cached already */
if (indexRelation->rd_exclstrats != NULL)
@@ -5887,7 +5886,7 @@ RelationBuildPublicationDesc(Relation relation, PublicationDesc *pubdesc)
static bytea **
CopyIndexAttOptions(bytea **srcopts, int natts)
{
- bytea **opts = palloc(sizeof(*opts) * natts);
+ bytea **opts = palloc_array(bytea *, natts);
for (int i = 0; i < natts; i++)
{
@@ -5919,7 +5918,7 @@ RelationGetIndexAttOptions(Relation relation, bool copy)
return copy ? CopyIndexAttOptions(opts, natts) : opts;
/* Get and parse opclass options. */
- opts = palloc0(sizeof(*opts) * natts);
+ opts = palloc0_array(bytea *, natts);
for (i = 0; i < natts; i++)
{
@@ -6114,7 +6113,7 @@ load_relcache_init_file(bool shared)
* helps to guard against broken init files.
*/
max_rels = 100;
- rels = (Relation *) palloc(max_rels * sizeof(Relation));
+ rels = palloc_array(Relation, max_rels);
num_rels = 0;
nailed_rels = nailed_indexes = 0;
@@ -6149,7 +6148,7 @@ load_relcache_init_file(bool shared)
if (num_rels >= max_rels)
{
max_rels *= 2;
- rels = (Relation *) repalloc(rels, max_rels * sizeof(Relation));
+ rels = repalloc_array(rels, Relation, max_rels);
}
rel = rels[num_rels++] = (Relation) palloc(len);
@@ -6212,7 +6211,7 @@ load_relcache_init_file(bool shared)
/* mark not-null status */
if (has_not_null)
{
- TupleConstr *constr = (TupleConstr *) palloc0(sizeof(TupleConstr));
+ TupleConstr *constr = palloc0_object(TupleConstr);
constr->has_not_null = true;
rel->rd_att->constr = constr;
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 5a3b3788d02..1696b517508 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -443,8 +443,8 @@ lookup_type_cache(Oid type_id, int flags)
int allocsize;
allocsize = in_progress_list_maxlen * 2;
- in_progress_list = repalloc(in_progress_list,
- allocsize * sizeof(*in_progress_list));
+ in_progress_list = repalloc_array(in_progress_list, Oid,
+ allocsize);
in_progress_list_maxlen = allocsize;
}
in_progress_offset = in_progress_list_len++;
@@ -1216,14 +1216,15 @@ load_domaintype_info(TypeCacheEntry *typentry)
if (ccons == NULL)
{
cconslen = 8;
- ccons = (DomainConstraintState **)
- palloc(cconslen * sizeof(DomainConstraintState *));
+ ccons = palloc_array(DomainConstraintState *,
+ cconslen);
}
else if (nccons >= cconslen)
{
cconslen *= 2;
- ccons = (DomainConstraintState **)
- repalloc(ccons, cconslen * sizeof(DomainConstraintState *));
+ ccons = repalloc_array(ccons,
+ DomainConstraintState *,
+ cconslen);
}
ccons[nccons++] = r;
}
@@ -2751,7 +2752,7 @@ load_enum_cache_data(TypeCacheEntry *tcache)
* through.
*/
maxitems = 64;
- items = (EnumItem *) palloc(sizeof(EnumItem) * maxitems);
+ items = palloc_array(EnumItem, maxitems);
numitems = 0;
/* Scan pg_enum for the members of the target enum type. */
@@ -2773,7 +2774,7 @@ load_enum_cache_data(TypeCacheEntry *tcache)
if (numitems >= maxitems)
{
maxitems *= 2;
- items = (EnumItem *) repalloc(items, sizeof(EnumItem) * maxitems);
+ items = repalloc_array(items, EnumItem, maxitems);
}
items[numitems].enum_oid = en->oid;
items[numitems].sort_order = en->enumsortorder;
diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
index 860bbd40d42..1aca888c5af 100644
--- a/src/backend/utils/error/elog.c
+++ b/src/backend/utils/error/elog.c
@@ -1757,7 +1757,7 @@ CopyErrorData(void)
Assert(CurrentMemoryContext != ErrorContext);
/* Copy the struct itself */
- newedata = (ErrorData *) palloc(sizeof(ErrorData));
+ newedata = palloc_object(ErrorData);
memcpy(newedata, edata, sizeof(ErrorData));
/*
diff --git a/src/backend/utils/fmgr/fmgr.c b/src/backend/utils/fmgr/fmgr.c
index aa89ae8fe1a..31197a24925 100644
--- a/src/backend/utils/fmgr/fmgr.c
+++ b/src/backend/utils/fmgr/fmgr.c
@@ -1806,7 +1806,7 @@ OidSendFunctionCall(Oid functionId, Datum val)
Datum
Int64GetDatum(int64 X)
{
- int64 *retval = (int64 *) palloc(sizeof(int64));
+ int64 *retval = palloc_object(int64);
*retval = X;
return PointerGetDatum(retval);
@@ -1815,7 +1815,7 @@ Int64GetDatum(int64 X)
Datum
Float8GetDatum(float8 X)
{
- float8 *retval = (float8 *) palloc(sizeof(float8));
+ float8 *retval = palloc_object(float8);
*retval = X;
return PointerGetDatum(retval);
diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c
index 5f2317211c9..8ebc99a1398 100644
--- a/src/backend/utils/fmgr/funcapi.c
+++ b/src/backend/utils/fmgr/funcapi.c
@@ -1570,7 +1570,7 @@ get_func_input_arg_names(Datum proargnames, Datum proargmodes,
}
/* extract input-argument names */
- inargnames = (char **) palloc(numargs * sizeof(char *));
+ inargnames = palloc_array(char *, numargs);
numinargs = 0;
for (i = 0; i < numargs; i++)
{
@@ -1809,8 +1809,8 @@ build_function_result_tupdesc_d(char prokind,
return NULL;
/* extract output-argument types and names */
- outargtypes = (Oid *) palloc(numargs * sizeof(Oid));
- outargnames = (char **) palloc(numargs * sizeof(char *));
+ outargtypes = palloc_array(Oid, numargs);
+ outargnames = palloc_array(char *, numargs);
numoutargs = 0;
for (i = 0; i < numargs; i++)
{
@@ -2040,7 +2040,7 @@ extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start,
&nargs);
/* All the elements of the array have the same type */
- types_res = (Oid *) palloc0(nargs * sizeof(Oid));
+ types_res = palloc0_array(Oid, nargs);
for (i = 0; i < nargs; i++)
types_res[i] = element_type;
}
@@ -2048,9 +2048,9 @@ extract_variadic_args(FunctionCallInfo fcinfo, int variadic_start,
{
nargs = PG_NARGS() - variadic_start;
Assert(nargs > 0);
- nulls_res = (bool *) palloc0(nargs * sizeof(bool));
- args_res = (Datum *) palloc0(nargs * sizeof(Datum));
- types_res = (Oid *) palloc0(nargs * sizeof(Oid));
+ nulls_res = palloc0_array(bool, nargs);
+ args_res = palloc0_array(Datum, nargs);
+ types_res = palloc0_array(Oid, nargs);
for (i = 0; i < nargs; i++)
{
diff --git a/src/backend/utils/init/postinit.c b/src/backend/utils/init/postinit.c
index 01bb6a410cb..592d18266de 100644
--- a/src/backend/utils/init/postinit.c
+++ b/src/backend/utils/init/postinit.c
@@ -1234,7 +1234,7 @@ process_startup_options(Port *port, bool am_superuser)
maxac = 2 + (strlen(port->cmdline_options) + 1) / 2;
- av = (char **) palloc(maxac * sizeof(char *));
+ av = palloc_array(char *, maxac);
ac = 0;
av[ac++] = "postgres";
diff --git a/src/backend/utils/mb/mbutils.c b/src/backend/utils/mb/mbutils.c
index 885dc11d51d..3e502fc15fa 100644
--- a/src/backend/utils/mb/mbutils.c
+++ b/src/backend/utils/mb/mbutils.c
@@ -1783,7 +1783,7 @@ pgwin32_message_to_UTF16(const char *str, int len, int *utf16len)
*/
if (codepage != 0)
{
- utf16 = (WCHAR *) palloc(sizeof(WCHAR) * (len + 1));
+ utf16 = palloc_array(WCHAR, (len + 1));
dstlen = MultiByteToWideChar(codepage, 0, str, len, utf16, len);
utf16[dstlen] = (WCHAR) 0;
}
@@ -1807,7 +1807,7 @@ pgwin32_message_to_UTF16(const char *str, int len, int *utf16len)
else
utf8 = (char *) str;
- utf16 = (WCHAR *) palloc(sizeof(WCHAR) * (len + 1));
+ utf16 = palloc_array(WCHAR, (len + 1));
dstlen = MultiByteToWideChar(CP_UTF8, 0, utf8, len, utf16, len);
utf16[dstlen] = (WCHAR) 0;
diff --git a/src/backend/utils/misc/conffiles.c b/src/backend/utils/misc/conffiles.c
index 23ebad4749b..d3df1a6ef1e 100644
--- a/src/backend/utils/misc/conffiles.c
+++ b/src/backend/utils/misc/conffiles.c
@@ -108,7 +108,7 @@ GetConfFilesInDir(const char *includedir, const char *calling_file,
* them prior to caller processing the contents.
*/
size_filenames = 32;
- filenames = (char **) palloc(size_filenames * sizeof(char *));
+ filenames = palloc_array(char *, size_filenames);
*num_filenames = 0;
while ((de = ReadDir(d, directory)) != NULL)
@@ -144,8 +144,8 @@ GetConfFilesInDir(const char *includedir, const char *calling_file,
if (*num_filenames >= size_filenames)
{
size_filenames += 32;
- filenames = (char **) repalloc(filenames,
- size_filenames * sizeof(char *));
+ filenames = repalloc_array(filenames, char *,
+ size_filenames);
}
filenames[*num_filenames] = pstrdup(filename);
(*num_filenames)++;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index f822b069f41..70e2dc23d92 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -877,7 +877,7 @@ get_guc_variables(int *num_vars)
int i;
*num_vars = hash_get_num_entries(guc_hashtab);
- result = palloc(sizeof(struct config_generic *) * *num_vars);
+ result = palloc_array(struct config_generic *, *num_vars);
/* Extract pointers from the hash table */
i = 0;
@@ -4573,7 +4573,7 @@ replace_auto_config_value(ConfigVariable **head_p, ConfigVariable **tail_p,
return;
/* OK, append a new entry */
- item = palloc(sizeof *item);
+ item = palloc_object(ConfigVariable);
item->name = pstrdup(name);
item->value = pstrdup(value);
item->errmsg = NULL;
@@ -5339,7 +5339,8 @@ get_explain_guc_options(int *num)
* While only a fraction of all the GUC variables are marked GUC_EXPLAIN,
* it doesn't seem worth dynamically resizing this array.
*/
- result = palloc(sizeof(struct config_generic *) * hash_get_num_entries(guc_hashtab));
+ result = palloc_array(struct config_generic *,
+ hash_get_num_entries(guc_hashtab));
/* We need only consider GUCs with source not PGC_S_DEFAULT */
dlist_foreach(iter, &guc_nondef_list)
diff --git a/src/backend/utils/misc/tzparser.c b/src/backend/utils/misc/tzparser.c
index 6aaf7395ba8..d7e84bab981 100644
--- a/src/backend/utils/misc/tzparser.c
+++ b/src/backend/utils/misc/tzparser.c
@@ -466,7 +466,7 @@ load_tzoffsets(const char *filename)
/* Initialize array at a reasonable size */
arraysize = 128;
- array = (tzEntry *) palloc(arraysize * sizeof(tzEntry));
+ array = palloc_array(tzEntry, arraysize);
/* Parse the file(s) */
n = ParseTzFile(filename, 0, &array, &arraysize, 0);
diff --git a/src/backend/utils/mmgr/dsa.c b/src/backend/utils/mmgr/dsa.c
index 17d4f7a7a06..3107757dce8 100644
--- a/src/backend/utils/mmgr/dsa.c
+++ b/src/backend/utils/mmgr/dsa.c
@@ -1280,7 +1280,7 @@ create_internal(void *place, size_t size,
* area. Other backends will need to obtain their own dsa_area object by
* attaching.
*/
- area = palloc(sizeof(dsa_area));
+ area = palloc_object(dsa_area);
area->control = control;
area->resowner = CurrentResourceOwner;
memset(area->segment_maps, 0, sizeof(dsa_segment_map) * DSA_MAX_SEGMENTS);
@@ -1336,7 +1336,7 @@ attach_internal(void *place, dsm_segment *segment, dsa_handle handle)
(DSA_SEGMENT_HEADER_MAGIC ^ handle ^ 0));
/* Build the backend-local area object. */
- area = palloc(sizeof(dsa_area));
+ area = palloc_object(dsa_area);
area->control = control;
area->resowner = CurrentResourceOwner;
memset(&area->segment_maps[0], 0,
diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c
index 47e601ef62c..b48f373a25e 100644
--- a/src/backend/utils/sort/logtape.c
+++ b/src/backend/utils/sort/logtape.c
@@ -560,7 +560,7 @@ LogicalTapeSetCreate(bool preallocate, SharedFileSet *fileset, int worker)
/*
* Create top-level struct including per-tape LogicalTape structs.
*/
- lts = (LogicalTapeSet *) palloc(sizeof(LogicalTapeSet));
+ lts = palloc_object(LogicalTapeSet);
lts->nBlocksAllocated = 0L;
lts->nBlocksWritten = 0L;
lts->nHoleBlocks = 0L;
@@ -700,7 +700,7 @@ ltsCreateTape(LogicalTapeSet *lts)
/*
* Create per-tape struct. Note we allocate the I/O buffer lazily.
*/
- lt = palloc(sizeof(LogicalTape));
+ lt = palloc_object(LogicalTape);
lt->tapeSet = lts;
lt->writing = true;
lt->frozen = false;
diff --git a/src/backend/utils/sort/sharedtuplestore.c b/src/backend/utils/sort/sharedtuplestore.c
index 2f031c32909..965d94447e7 100644
--- a/src/backend/utils/sort/sharedtuplestore.c
+++ b/src/backend/utils/sort/sharedtuplestore.c
@@ -161,7 +161,7 @@ sts_initialize(SharedTuplestore *sts, int participants,
sts->participants[i].writing = false;
}
- accessor = palloc0(sizeof(SharedTuplestoreAccessor));
+ accessor = palloc0_object(SharedTuplestoreAccessor);
accessor->participant = my_participant_number;
accessor->sts = sts;
accessor->fileset = fileset;
@@ -183,7 +183,7 @@ sts_attach(SharedTuplestore *sts,
Assert(my_participant_number < sts->nparticipants);
- accessor = palloc0(sizeof(SharedTuplestoreAccessor));
+ accessor = palloc0_object(SharedTuplestoreAccessor);
accessor->participant = my_participant_number;
accessor->sts = sts;
accessor->fileset = fileset;
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index bda1bffa3cc..dbcd8227ff1 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -677,7 +677,7 @@ tuplesort_begin_common(int workMem, SortCoordinate coordinate, int sortopt)
*/
oldcontext = MemoryContextSwitchTo(maincontext);
- state = (Tuplesortstate *) palloc0(sizeof(Tuplesortstate));
+ state = palloc0_object(Tuplesortstate);
if (trace_sort)
pg_rusage_init(&state->ru_start);
diff --git a/src/backend/utils/sort/tuplesortvariants.c b/src/backend/utils/sort/tuplesortvariants.c
index 913c4ef455e..8cefb4c9e25 100644
--- a/src/backend/utils/sort/tuplesortvariants.c
+++ b/src/backend/utils/sort/tuplesortvariants.c
@@ -254,7 +254,7 @@ tuplesort_begin_cluster(TupleDesc tupDesc,
Assert(indexRel->rd_rel->relam == BTREE_AM_OID);
oldcontext = MemoryContextSwitchTo(base->maincontext);
- arg = (TuplesortClusterArg *) palloc0(sizeof(TuplesortClusterArg));
+ arg = palloc0_object(TuplesortClusterArg);
if (trace_sort)
elog(LOG,
@@ -362,7 +362,7 @@ tuplesort_begin_index_btree(Relation heapRel,
int i;
oldcontext = MemoryContextSwitchTo(base->maincontext);
- arg = (TuplesortIndexBTreeArg *) palloc(sizeof(TuplesortIndexBTreeArg));
+ arg = palloc_object(TuplesortIndexBTreeArg);
if (trace_sort)
elog(LOG,
@@ -444,7 +444,7 @@ tuplesort_begin_index_hash(Relation heapRel,
TuplesortIndexHashArg *arg;
oldcontext = MemoryContextSwitchTo(base->maincontext);
- arg = (TuplesortIndexHashArg *) palloc(sizeof(TuplesortIndexHashArg));
+ arg = palloc_object(TuplesortIndexHashArg);
if (trace_sort)
elog(LOG,
@@ -493,7 +493,7 @@ tuplesort_begin_index_gist(Relation heapRel,
int i;
oldcontext = MemoryContextSwitchTo(base->maincontext);
- arg = (TuplesortIndexBTreeArg *) palloc(sizeof(TuplesortIndexBTreeArg));
+ arg = palloc_object(TuplesortIndexBTreeArg);
if (trace_sort)
elog(LOG,
@@ -582,7 +582,7 @@ tuplesort_begin_datum(Oid datumType, Oid sortOperator, Oid sortCollation,
bool typbyval;
oldcontext = MemoryContextSwitchTo(base->maincontext);
- arg = (TuplesortDatumArg *) palloc(sizeof(TuplesortDatumArg));
+ arg = palloc_object(TuplesortDatumArg);
if (trace_sort)
elog(LOG,
diff --git a/src/backend/utils/sort/tuplestore.c b/src/backend/utils/sort/tuplestore.c
index aacec8b7993..1d1e681a251 100644
--- a/src/backend/utils/sort/tuplestore.c
+++ b/src/backend/utils/sort/tuplestore.c
@@ -257,7 +257,7 @@ tuplestore_begin_common(int eflags, bool interXact, int maxKBytes)
{
Tuplestorestate *state;
- state = (Tuplestorestate *) palloc0(sizeof(Tuplestorestate));
+ state = palloc0_object(Tuplestorestate);
state->status = TSS_INMEM;
state->eflags = eflags;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index 13b3927191e..86c7b11fd7d 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -242,8 +242,8 @@ GetComboCommandId(CommandId cmin, CommandId cmax)
{
int newsize = sizeComboCids * 2;
- comboCids = (ComboCidKeyData *)
- repalloc(comboCids, sizeof(ComboCidKeyData) * newsize);
+ comboCids = repalloc_array(comboCids, ComboCidKeyData,
+ newsize);
sizeComboCids = newsize;
}
diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c
index 8f1508b1ee2..64c08e34b82 100644
--- a/src/backend/utils/time/snapmgr.c
+++ b/src/backend/utils/time/snapmgr.c
@@ -1107,7 +1107,7 @@ ExportSnapshot(Snapshot snapshot)
snapshot = CopySnapshot(snapshot);
oldcxt = MemoryContextSwitchTo(TopTransactionContext);
- esnap = (ExportedSnapshot *) palloc(sizeof(ExportedSnapshot));
+ esnap = palloc_object(ExportedSnapshot);
esnap->snapfile = pstrdup(path);
esnap->snapshot = snapshot;
exportedSnapshots = lappend(exportedSnapshots, esnap);
diff --git a/src/bin/pg_basebackup/astreamer_inject.c b/src/bin/pg_basebackup/astreamer_inject.c
index 15334e458ad..e77de72f7ac 100644
--- a/src/bin/pg_basebackup/astreamer_inject.c
+++ b/src/bin/pg_basebackup/astreamer_inject.c
@@ -68,7 +68,7 @@ astreamer_recovery_injector_new(astreamer *next,
{
astreamer_recovery_injector *streamer;
- streamer = palloc0(sizeof(astreamer_recovery_injector));
+ streamer = palloc0_object(astreamer_recovery_injector);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_recovery_injector_ops;
streamer->base.bbs_next = next;
diff --git a/src/bin/pg_combinebackup/load_manifest.c b/src/bin/pg_combinebackup/load_manifest.c
index 485fe518e41..266cd558f74 100644
--- a/src/bin/pg_combinebackup/load_manifest.c
+++ b/src/bin/pg_combinebackup/load_manifest.c
@@ -298,7 +298,7 @@ combinebackup_per_wal_range_cb(JsonManifestParseContext *context,
manifest_wal_range *range;
/* Allocate and initialize a struct describing this WAL range. */
- range = palloc(sizeof(manifest_wal_range));
+ range = palloc_object(manifest_wal_range);
range->tli = tli;
range->start_lsn = start_lsn;
range->end_lsn = end_lsn;
diff --git a/src/bin/pg_dump/common.c b/src/bin/pg_dump/common.c
index 56b6c368acf..fef56cf3f20 100644
--- a/src/bin/pg_dump/common.c
+++ b/src/bin/pg_dump/common.c
@@ -352,7 +352,7 @@ flagInhTables(Archive *fout, TableInfo *tblinfo, int numTables,
tblinfo[i].numParents,
tblinfo[i].dobj.name);
- attachinfo = (TableAttachInfo *) palloc(sizeof(TableAttachInfo));
+ attachinfo = palloc_object(TableAttachInfo);
attachinfo->dobj.objType = DO_TABLE_ATTACH;
attachinfo->dobj.catId.tableoid = 0;
attachinfo->dobj.catId.oid = 0;
diff --git a/src/bin/pg_verifybackup/astreamer_verify.c b/src/bin/pg_verifybackup/astreamer_verify.c
index 6c3a132ea3a..ed236bd601c 100644
--- a/src/bin/pg_verifybackup/astreamer_verify.c
+++ b/src/bin/pg_verifybackup/astreamer_verify.c
@@ -68,7 +68,7 @@ astreamer_verify_content_new(astreamer *next, verifier_context *context,
{
astreamer_verify *streamer;
- streamer = palloc0(sizeof(astreamer_verify));
+ streamer = palloc0_object(astreamer_verify);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_verify_ops;
diff --git a/src/bin/pg_verifybackup/pg_verifybackup.c b/src/bin/pg_verifybackup/pg_verifybackup.c
index 7c720ab98bd..2c576371036 100644
--- a/src/bin/pg_verifybackup/pg_verifybackup.c
+++ b/src/bin/pg_verifybackup/pg_verifybackup.c
@@ -583,7 +583,7 @@ verifybackup_per_wal_range_cb(JsonManifestParseContext *context,
manifest_wal_range *range;
/* Allocate and initialize a struct describing this WAL range. */
- range = palloc(sizeof(manifest_wal_range));
+ range = palloc_object(manifest_wal_range);
range->tli = tli;
range->start_lsn = start_lsn;
range->end_lsn = end_lsn;
diff --git a/src/common/blkreftable.c b/src/common/blkreftable.c
index 6d9c1dfddbc..6478ec8c3c0 100644
--- a/src/common/blkreftable.c
+++ b/src/common/blkreftable.c
@@ -234,7 +234,7 @@ static void BlockRefTableFileTerminate(BlockRefTableBuffer *buffer);
BlockRefTable *
CreateEmptyBlockRefTable(void)
{
- BlockRefTable *brtab = palloc(sizeof(BlockRefTable));
+ BlockRefTable *brtab = palloc_object(BlockRefTable);
/*
* Even completely empty database has a few hundred relation forks, so it
@@ -496,8 +496,8 @@ WriteBlockRefTable(BlockRefTable *brtab,
BlockRefTableEntry *brtentry;
/* Extract entries into serializable format and sort them. */
- sdata =
- palloc(brtab->hash->members * sizeof(BlockRefTableSerializedEntry));
+ sdata = palloc_array(BlockRefTableSerializedEntry,
+ brtab->hash->members);
blockreftable_start_iterate(brtab->hash, &it);
while ((brtentry = blockreftable_iterate(brtab->hash, &it)) != NULL)
{
@@ -584,7 +584,7 @@ CreateBlockRefTableReader(io_callback_fn read_callback,
uint32 magic;
/* Initialize data structure. */
- reader = palloc0(sizeof(BlockRefTableReader));
+ reader = palloc0_object(BlockRefTableReader);
reader->buffer.io_callback = read_callback;
reader->buffer.io_callback_arg = read_callback_arg;
reader->error_filename = error_filename;
@@ -794,7 +794,7 @@ CreateBlockRefTableWriter(io_callback_fn write_callback,
uint32 magic = BLOCKREFTABLE_MAGIC;
/* Prepare buffer and CRC check and save callbacks. */
- writer = palloc0(sizeof(BlockRefTableWriter));
+ writer = palloc0_object(BlockRefTableWriter);
writer->buffer.io_callback = write_callback;
writer->buffer.io_callback_arg = write_callback_arg;
INIT_CRC32C(writer->buffer.crc);
@@ -874,7 +874,7 @@ DestroyBlockRefTableWriter(BlockRefTableWriter *writer)
BlockRefTableEntry *
CreateBlockRefTableEntry(RelFileLocator rlocator, ForkNumber forknum)
{
- BlockRefTableEntry *entry = palloc0(sizeof(BlockRefTableEntry));
+ BlockRefTableEntry *entry = palloc0_object(BlockRefTableEntry);
memcpy(&entry->key.rlocator, &rlocator, sizeof(RelFileLocator));
entry->key.forknum = forknum;
@@ -1070,7 +1070,7 @@ BlockRefTableEntryMarkBlockModified(BlockRefTableEntry *entry,
unsigned j;
/* Allocate a new chunk. */
- newchunk = palloc0(MAX_ENTRIES_PER_CHUNK * sizeof(uint16));
+ newchunk = palloc0_array(uint16, MAX_ENTRIES_PER_CHUNK);
/* Set the bit for each existing entry. */
for (j = 0; j < entry->chunk_usage[chunkno]; ++j)
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 05858578207..531efa10dec 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -132,8 +132,8 @@ json_parse_manifest_incremental_init(JsonManifestParseContext *context)
JsonManifestParseState *parse;
pg_cryptohash_ctx *manifest_ctx;
- incstate = palloc(sizeof(JsonManifestParseIncrementalState));
- parse = palloc(sizeof(JsonManifestParseState));
+ incstate = palloc_object(JsonManifestParseIncrementalState);
+ parse = palloc_object(JsonManifestParseState);
parse->context = context;
parse->state = JM_EXPECT_TOPLEVEL_START;
diff --git a/src/common/pgfnames.c b/src/common/pgfnames.c
index 8fb79105714..bcb5f5bb6f7 100644
--- a/src/common/pgfnames.c
+++ b/src/common/pgfnames.c
@@ -49,7 +49,7 @@ pgfnames(const char *path)
return NULL;
}
- filenames = (char **) palloc(fnsize * sizeof(char *));
+ filenames = palloc_array(char *, fnsize);
while (errno = 0, (file = readdir(dir)) != NULL)
{
@@ -58,8 +58,8 @@ pgfnames(const char *path)
if (numnames + 1 >= fnsize)
{
fnsize *= 2;
- filenames = (char **) repalloc(filenames,
- fnsize * sizeof(char *));
+ filenames = repalloc_array(filenames, char *,
+ fnsize);
}
filenames[numnames++] = pstrdup(file->d_name);
}
diff --git a/src/common/rmtree.c b/src/common/rmtree.c
index 2f364f84ae5..47cd0a4d8a1 100644
--- a/src/common/rmtree.c
+++ b/src/common/rmtree.c
@@ -64,7 +64,7 @@ rmtree(const char *path, bool rmtopdir)
return false;
}
- dirnames = (char **) palloc(sizeof(char *) * dirnames_capacity);
+ dirnames = palloc_array(char *, dirnames_capacity);
while (errno = 0, (de = readdir(dir)))
{
diff --git a/src/fe_utils/astreamer_file.c b/src/fe_utils/astreamer_file.c
index c6856285086..940f0f5a1a6 100644
--- a/src/fe_utils/astreamer_file.c
+++ b/src/fe_utils/astreamer_file.c
@@ -82,7 +82,7 @@ astreamer_plain_writer_new(char *pathname, FILE *file)
{
astreamer_plain_writer *streamer;
- streamer = palloc0(sizeof(astreamer_plain_writer));
+ streamer = palloc0_object(astreamer_plain_writer);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_plain_writer_ops;
@@ -189,7 +189,7 @@ astreamer_extractor_new(const char *basepath,
{
astreamer_extractor *streamer;
- streamer = palloc0(sizeof(astreamer_extractor));
+ streamer = palloc0_object(astreamer_extractor);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_extractor_ops;
streamer->basepath = pstrdup(basepath);
diff --git a/src/fe_utils/astreamer_gzip.c b/src/fe_utils/astreamer_gzip.c
index a395f57edcd..06e2670d363 100644
--- a/src/fe_utils/astreamer_gzip.c
+++ b/src/fe_utils/astreamer_gzip.c
@@ -102,7 +102,7 @@ astreamer_gzip_writer_new(char *pathname, FILE *file,
#ifdef HAVE_LIBZ
astreamer_gzip_writer *streamer;
- streamer = palloc0(sizeof(astreamer_gzip_writer));
+ streamer = palloc0_object(astreamer_gzip_writer);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_gzip_writer_ops;
@@ -241,7 +241,7 @@ astreamer_gzip_decompressor_new(astreamer *next)
Assert(next != NULL);
- streamer = palloc0(sizeof(astreamer_gzip_decompressor));
+ streamer = palloc0_object(astreamer_gzip_decompressor);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_gzip_decompressor_ops;
diff --git a/src/fe_utils/astreamer_lz4.c b/src/fe_utils/astreamer_lz4.c
index 781aaf99f38..9dc886a7926 100644
--- a/src/fe_utils/astreamer_lz4.c
+++ b/src/fe_utils/astreamer_lz4.c
@@ -78,7 +78,7 @@ astreamer_lz4_compressor_new(astreamer *next, pg_compress_specification *compres
Assert(next != NULL);
- streamer = palloc0(sizeof(astreamer_lz4_frame));
+ streamer = palloc0_object(astreamer_lz4_frame);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_lz4_compressor_ops;
@@ -282,7 +282,7 @@ astreamer_lz4_decompressor_new(astreamer *next)
Assert(next != NULL);
- streamer = palloc0(sizeof(astreamer_lz4_frame));
+ streamer = palloc0_object(astreamer_lz4_frame);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_lz4_decompressor_ops;
diff --git a/src/fe_utils/astreamer_tar.c b/src/fe_utils/astreamer_tar.c
index 088e2357920..896f8ab4970 100644
--- a/src/fe_utils/astreamer_tar.c
+++ b/src/fe_utils/astreamer_tar.c
@@ -94,7 +94,7 @@ astreamer_tar_parser_new(astreamer *next)
{
astreamer_tar_parser *streamer;
- streamer = palloc0(sizeof(astreamer_tar_parser));
+ streamer = palloc0_object(astreamer_tar_parser);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_tar_parser_ops;
streamer->base.bbs_next = next;
@@ -357,7 +357,7 @@ astreamer_tar_archiver_new(astreamer *next)
{
astreamer_tar_archiver *streamer;
- streamer = palloc0(sizeof(astreamer_tar_archiver));
+ streamer = palloc0_object(astreamer_tar_archiver);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_tar_archiver_ops;
streamer->base.bbs_next = next;
@@ -463,7 +463,7 @@ astreamer_tar_terminator_new(astreamer *next)
{
astreamer *streamer;
- streamer = palloc0(sizeof(astreamer));
+ streamer = palloc0_object(astreamer);
*((const astreamer_ops **) &streamer->bbs_ops) =
&astreamer_tar_terminator_ops;
streamer->bbs_next = next;
diff --git a/src/fe_utils/astreamer_zstd.c b/src/fe_utils/astreamer_zstd.c
index bacdcc150c4..6666f1abeb3 100644
--- a/src/fe_utils/astreamer_zstd.c
+++ b/src/fe_utils/astreamer_zstd.c
@@ -75,7 +75,7 @@ astreamer_zstd_compressor_new(astreamer *next, pg_compress_specification *compre
Assert(next != NULL);
- streamer = palloc0(sizeof(astreamer_zstd_frame));
+ streamer = palloc0_object(astreamer_zstd_frame);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_zstd_compressor_ops;
@@ -266,7 +266,7 @@ astreamer_zstd_decompressor_new(astreamer *next)
Assert(next != NULL);
- streamer = palloc0(sizeof(astreamer_zstd_frame));
+ streamer = palloc0_object(astreamer_zstd_frame);
*((const astreamer_ops **) &streamer->base.bbs_ops) =
&astreamer_zstd_decompressor_ops;
diff --git a/src/pl/plperl/plperl.c b/src/pl/plperl/plperl.c
index 1b1677e333b..8e94efd1f39 100644
--- a/src/pl/plperl/plperl.c
+++ b/src/pl/plperl/plperl.c
@@ -1079,8 +1079,8 @@ plperl_build_tuple_result(HV *perlhash, TupleDesc td)
HE *he;
HeapTuple tup;
- values = palloc0(sizeof(Datum) * td->natts);
- nulls = palloc(sizeof(bool) * td->natts);
+ values = palloc0_array(Datum, td->natts);
+ nulls = palloc_array(bool, td->natts);
memset(nulls, true, sizeof(bool) * td->natts);
hv_iterinit(perlhash);
@@ -1499,7 +1499,7 @@ plperl_ref_from_pg_array(Datum arg, Oid typid)
* Currently we make no effort to cache any of the stuff we look up here,
* which is bad.
*/
- info = palloc0(sizeof(plperl_array_info));
+ info = palloc0_object(plperl_array_info);
/* get element type information, including output conversion function */
get_type_io_data(elementtype, IOFunc_output,
@@ -1785,9 +1785,9 @@ plperl_modify_tuple(HV *hvTD, TriggerData *tdata, HeapTuple otup)
tupdesc = tdata->tg_relation->rd_att;
natts = tupdesc->natts;
- modvalues = (Datum *) palloc0(natts * sizeof(Datum));
- modnulls = (bool *) palloc0(natts * sizeof(bool));
- modrepls = (bool *) palloc0(natts * sizeof(bool));
+ modvalues = palloc0_array(Datum, natts);
+ modnulls = palloc0_array(bool, natts);
+ modrepls = palloc0_array(bool, natts);
hv_iterinit(hvNew);
while ((he = hv_iternext(hvNew)))
@@ -2794,7 +2794,7 @@ compile_plperl_function(Oid fn_oid, bool is_trigger, bool is_event_trigger)
* struct prodesc and subsidiary data must all live in proc_cxt.
************************************************************/
oldcontext = MemoryContextSwitchTo(proc_cxt);
- prodesc = (plperl_proc_desc *) palloc0(sizeof(plperl_proc_desc));
+ prodesc = palloc0_object(plperl_proc_desc);
prodesc->proname = pstrdup(NameStr(procStruct->proname));
MemoryContextSetIdentifier(proc_cxt, prodesc->proname);
prodesc->fn_cxt = proc_cxt;
@@ -3590,7 +3590,7 @@ plperl_spi_prepare(char *query, int argc, SV **argv)
"PL/Perl spi_prepare query",
ALLOCSET_SMALL_SIZES);
MemoryContextSwitchTo(plan_cxt);
- qdesc = (plperl_query_desc *) palloc0(sizeof(plperl_query_desc));
+ qdesc = palloc0_object(plperl_query_desc);
snprintf(qdesc->qname, sizeof(qdesc->qname), "%p", qdesc);
qdesc->plan_cxt = plan_cxt;
qdesc->nargs = argc;
@@ -3768,7 +3768,7 @@ plperl_spi_exec_prepared(char *query, HV *attr, int argc, SV **argv)
if (argc > 0)
{
nulls = (char *) palloc(argc);
- argvalues = (Datum *) palloc(argc * sizeof(Datum));
+ argvalues = palloc_array(Datum, argc);
}
else
{
@@ -3881,7 +3881,7 @@ plperl_spi_query_prepared(char *query, int argc, SV **argv)
if (argc > 0)
{
nulls = (char *) palloc(argc);
- argvalues = (Datum *) palloc(argc * sizeof(Datum));
+ argvalues = palloc_array(Datum, argc);
}
else
{
diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c
index 9dc8218292d..9e2119e8eb2 100644
--- a/src/pl/plpgsql/src/pl_comp.c
+++ b/src/pl/plpgsql/src/pl_comp.c
@@ -406,8 +406,9 @@ do_compile(FunctionCallInfo fcinfo,
forValidator,
plpgsql_error_funcname);
- in_arg_varnos = (int *) palloc(numargs * sizeof(int));
- out_arg_variables = (PLpgSQL_variable **) palloc(numargs * sizeof(PLpgSQL_variable *));
+ in_arg_varnos = palloc_array(int, numargs);
+ out_arg_variables = palloc_array(PLpgSQL_variable *,
+ numargs);
MemoryContextSwitchTo(func_cxt);
@@ -879,7 +880,7 @@ plpgsql_compile_inline(char *proc_source)
plpgsql_check_syntax = check_function_bodies;
/* Function struct does not live past current statement */
- function = (PLpgSQL_function *) palloc0(sizeof(PLpgSQL_function));
+ function = palloc0_object(PLpgSQL_function);
plpgsql_curr_compile = function;
@@ -1054,7 +1055,7 @@ add_dummy_return(PLpgSQL_function *function)
{
PLpgSQL_stmt_block *new;
- new = palloc0(sizeof(PLpgSQL_stmt_block));
+ new = palloc0_object(PLpgSQL_stmt_block);
new->cmd_type = PLPGSQL_STMT_BLOCK;
new->stmtid = ++function->nstatements;
new->body = list_make1(function->action);
@@ -1066,7 +1067,7 @@ add_dummy_return(PLpgSQL_function *function)
{
PLpgSQL_stmt_return *new;
- new = palloc0(sizeof(PLpgSQL_stmt_return));
+ new = palloc0_object(PLpgSQL_stmt_return);
new->cmd_type = PLPGSQL_STMT_RETURN;
new->stmtid = ++function->nstatements;
new->expr = NULL;
@@ -1861,7 +1862,7 @@ plpgsql_build_variable(const char *refname, int lineno, PLpgSQL_type *dtype,
/* Ordinary scalar datatype */
PLpgSQL_var *var;
- var = palloc0(sizeof(PLpgSQL_var));
+ var = palloc0_object(PLpgSQL_var);
var->dtype = PLPGSQL_DTYPE_VAR;
var->refname = pstrdup(refname);
var->lineno = lineno;
@@ -1918,7 +1919,7 @@ plpgsql_build_record(const char *refname, int lineno,
{
PLpgSQL_rec *rec;
- rec = palloc0(sizeof(PLpgSQL_rec));
+ rec = palloc0_object(PLpgSQL_rec);
rec->dtype = PLPGSQL_DTYPE_REC;
rec->refname = pstrdup(refname);
rec->lineno = lineno;
@@ -1944,7 +1945,7 @@ build_row_from_vars(PLpgSQL_variable **vars, int numvars)
PLpgSQL_row *row;
int i;
- row = palloc0(sizeof(PLpgSQL_row));
+ row = palloc0_object(PLpgSQL_row);
row->dtype = PLPGSQL_DTYPE_ROW;
row->refname = "(unnamed row)";
row->lineno = -1;
@@ -2025,7 +2026,7 @@ plpgsql_build_recfield(PLpgSQL_rec *rec, const char *fldname)
}
/* nope, so make a new one */
- recfield = palloc0(sizeof(PLpgSQL_recfield));
+ recfield = palloc0_object(PLpgSQL_recfield);
recfield->dtype = PLPGSQL_DTYPE_RECFIELD;
recfield->fieldname = pstrdup(fldname);
recfield->recparentno = rec->dno;
@@ -2087,7 +2088,7 @@ build_datatype(HeapTuple typeTup, int32 typmod,
errmsg("type \"%s\" is only a shell",
NameStr(typeStruct->typname))));
- typ = (PLpgSQL_type *) palloc(sizeof(PLpgSQL_type));
+ typ = palloc_object(PLpgSQL_type);
typ->typname = pstrdup(NameStr(typeStruct->typname));
typ->typoid = typeStruct->oid;
@@ -2271,7 +2272,7 @@ plpgsql_parse_err_condition(char *condname)
*/
if (strcmp(condname, "others") == 0)
{
- new = palloc(sizeof(PLpgSQL_condition));
+ new = palloc_object(PLpgSQL_condition);
new->sqlerrstate = 0;
new->condname = condname;
new->next = NULL;
@@ -2283,7 +2284,7 @@ plpgsql_parse_err_condition(char *condname)
{
if (strcmp(condname, exception_label_map[i].label) == 0)
{
- new = palloc(sizeof(PLpgSQL_condition));
+ new = palloc_object(PLpgSQL_condition);
new->sqlerrstate = exception_label_map[i].sqlerrstate;
new->condname = condname;
new->next = prev;
@@ -2327,7 +2328,8 @@ plpgsql_adddatum(PLpgSQL_datum *newdatum)
if (plpgsql_nDatums == datums_alloc)
{
datums_alloc *= 2;
- plpgsql_Datums = repalloc(plpgsql_Datums, sizeof(PLpgSQL_datum *) * datums_alloc);
+ plpgsql_Datums = repalloc_array(plpgsql_Datums,
+ PLpgSQL_datum *, datums_alloc);
}
newdatum->dno = plpgsql_nDatums;
diff --git a/src/pl/plpgsql/src/pl_exec.c b/src/pl/plpgsql/src/pl_exec.c
index e5b0da04e3c..8813efdc0a5 100644
--- a/src/pl/plpgsql/src/pl_exec.c
+++ b/src/pl/plpgsql/src/pl_exec.c
@@ -1481,7 +1481,7 @@ plpgsql_fulfill_promise(PLpgSQL_execstate *estate,
int lbs[1];
int i;
- elems = palloc(sizeof(Datum) * nelems);
+ elems = palloc_array(Datum, nelems);
for (i = 0; i < nelems; i++)
elems[i] = CStringGetTextDatum(estate->trigdata->tg_trigger->tgargs[i]);
dims[0] = nelems;
@@ -2324,7 +2324,7 @@ make_callstmt_target(PLpgSQL_execstate *estate, PLpgSQL_expr *expr)
*/
MemoryContextSwitchTo(estate->func->fn_cxt);
- row = (PLpgSQL_row *) palloc0(sizeof(PLpgSQL_row));
+ row = palloc0_object(PLpgSQL_row);
row->dtype = PLPGSQL_DTYPE_ROW;
row->refname = "(unnamed row)";
row->lineno = -1;
diff --git a/src/pl/plpython/plpy_cursorobject.c b/src/pl/plpython/plpy_cursorobject.c
index bb3fa8a3909..9ecd44ebc05 100644
--- a/src/pl/plpython/plpy_cursorobject.c
+++ b/src/pl/plpython/plpy_cursorobject.c
@@ -214,8 +214,8 @@ PLy_cursor_plan(PyObject *ob, PyObject *args)
if (nargs > 0)
{
- values = (Datum *) palloc(nargs * sizeof(Datum));
- nulls = (char *) palloc(nargs * sizeof(char));
+ values = palloc_array(Datum, nargs);
+ nulls = palloc_array(char, nargs);
}
else
{
diff --git a/src/pl/plpython/plpy_exec.c b/src/pl/plpython/plpy_exec.c
index 0e84bb90829..d8955954b8c 100644
--- a/src/pl/plpython/plpy_exec.c
+++ b/src/pl/plpython/plpy_exec.c
@@ -959,9 +959,9 @@ PLy_modify_tuple(PLyProcedure *proc, PyObject *pltd, TriggerData *tdata,
tupdesc = RelationGetDescr(tdata->tg_relation);
- modvalues = (Datum *) palloc0(tupdesc->natts * sizeof(Datum));
- modnulls = (bool *) palloc0(tupdesc->natts * sizeof(bool));
- modrepls = (bool *) palloc0(tupdesc->natts * sizeof(bool));
+ modvalues = palloc0_array(Datum, tupdesc->natts);
+ modnulls = palloc0_array(bool, tupdesc->natts);
+ modrepls = palloc0_array(bool, tupdesc->natts);
for (i = 0; i < nkeys; i++)
{
diff --git a/src/pl/plpython/plpy_procedure.c b/src/pl/plpython/plpy_procedure.c
index c35a3b801ab..1bbd5440db3 100644
--- a/src/pl/plpython/plpy_procedure.c
+++ b/src/pl/plpython/plpy_procedure.c
@@ -161,7 +161,7 @@ PLy_procedure_create(HeapTuple procTup, Oid fn_oid, bool is_trigger)
oldcxt = MemoryContextSwitchTo(cxt);
- proc = (PLyProcedure *) palloc0(sizeof(PLyProcedure));
+ proc = palloc0_object(PLyProcedure);
proc->mcxt = cxt;
PG_TRY();
diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c
index 77fbfd6c868..e81844f9562 100644
--- a/src/pl/plpython/plpy_spi.c
+++ b/src/pl/plpython/plpy_spi.c
@@ -233,8 +233,8 @@ PLy_spi_execute_plan(PyObject *ob, PyObject *list, long limit)
if (nargs > 0)
{
- values = (Datum *) palloc(nargs * sizeof(Datum));
- nulls = (char *) palloc(nargs * sizeof(char));
+ values = palloc_array(Datum, nargs);
+ nulls = palloc_array(char, nargs);
}
else
{
diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c
index db14c5f8dae..fba77277a02 100644
--- a/src/pl/plpython/plpy_typeio.c
+++ b/src/pl/plpython/plpy_typeio.c
@@ -1350,8 +1350,8 @@ PLyMapping_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *mapping)
Assert(PyMapping_Check(mapping));
/* Build tuple */
- values = palloc(sizeof(Datum) * desc->natts);
- nulls = palloc(sizeof(bool) * desc->natts);
+ values = palloc_array(Datum, desc->natts);
+ nulls = palloc_array(bool, desc->natts);
for (i = 0; i < desc->natts; ++i)
{
char *key;
@@ -1432,8 +1432,8 @@ PLySequence_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *sequence)
errmsg("length of returned sequence did not match number of columns in row")));
/* Build tuple */
- values = palloc(sizeof(Datum) * desc->natts);
- nulls = palloc(sizeof(bool) * desc->natts);
+ values = palloc_array(Datum, desc->natts);
+ nulls = palloc_array(bool, desc->natts);
idx = 0;
for (i = 0; i < desc->natts; ++i)
{
@@ -1490,8 +1490,8 @@ PLyGenericObject_ToComposite(PLyObToDatum *arg, TupleDesc desc, PyObject *object
volatile int i;
/* Build tuple */
- values = palloc(sizeof(Datum) * desc->natts);
- nulls = palloc(sizeof(bool) * desc->natts);
+ values = palloc_array(Datum, desc->natts);
+ nulls = palloc_array(bool, desc->natts);
for (i = 0; i < desc->natts; ++i)
{
char *key;
diff --git a/src/pl/tcl/pltcl.c b/src/pl/tcl/pltcl.c
index feb6a76b56c..3e4afcf27e7 100644
--- a/src/pl/tcl/pltcl.c
+++ b/src/pl/tcl/pltcl.c
@@ -1583,7 +1583,7 @@ compile_pltcl_function(Oid fn_oid, Oid tgreloid,
* struct prodesc and subsidiary data must all live in proc_cxt.
************************************************************/
oldcontext = MemoryContextSwitchTo(proc_cxt);
- prodesc = (pltcl_proc_desc *) palloc0(sizeof(pltcl_proc_desc));
+ prodesc = palloc0_object(pltcl_proc_desc);
prodesc->user_proname = pstrdup(user_proname);
MemoryContextSetIdentifier(proc_cxt, prodesc->user_proname);
prodesc->internal_proname = pstrdup(internal_proname);
@@ -2665,7 +2665,7 @@ pltcl_SPI_prepare(ClientData cdata, Tcl_Interp *interp,
"PL/Tcl spi_prepare query",
ALLOCSET_SMALL_SIZES);
MemoryContextSwitchTo(plan_cxt);
- qdesc = (pltcl_query_desc *) palloc0(sizeof(pltcl_query_desc));
+ qdesc = palloc0_object(pltcl_query_desc);
snprintf(qdesc->qname, sizeof(qdesc->qname), "%p", qdesc);
qdesc->nargs = nargs;
qdesc->argtypes = (Oid *) palloc(nargs * sizeof(Oid));
@@ -2913,7 +2913,7 @@ pltcl_SPI_execute_plan(ClientData cdata, Tcl_Interp *interp,
* Setup the value array for SPI_execute_plan() using
* the type specific input functions
************************************************************/
- argvalues = (Datum *) palloc(callObjc * sizeof(Datum));
+ argvalues = palloc_array(Datum, callObjc);
for (j = 0; j < callObjc; j++)
{
@@ -3284,7 +3284,7 @@ pltcl_build_tuple_result(Tcl_Interp *interp, Tcl_Obj **kvObjv, int kvObjc,
attinmeta = NULL;
}
- values = (char **) palloc0(tupdesc->natts * sizeof(char *));
+ values = palloc0_array(char *, tupdesc->natts);
if (kvObjc % 2 != 0)
ereport(ERROR,
diff --git a/src/test/modules/dummy_index_am/dummy_index_am.c b/src/test/modules/dummy_index_am/dummy_index_am.c
index 7586f8ae5e1..81a951cae9e 100644
--- a/src/test/modules/dummy_index_am/dummy_index_am.c
+++ b/src/test/modules/dummy_index_am/dummy_index_am.c
@@ -138,7 +138,7 @@ dibuild(Relation heap, Relation index, IndexInfo *indexInfo)
{
IndexBuildResult *result;
- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));
+ result = palloc_object(IndexBuildResult);
/* let's pretend that no tuples were scanned */
result->heap_tuples = 0;
diff --git a/src/test/modules/plsample/plsample.c b/src/test/modules/plsample/plsample.c
index 78802a94f62..e1ba580068d 100644
--- a/src/test/modules/plsample/plsample.c
+++ b/src/test/modules/plsample/plsample.c
@@ -141,7 +141,7 @@ plsample_func_handler(PG_FUNCTION_ARGS)
"PL/Sample function",
ALLOCSET_SMALL_SIZES);
- arg_out_func = (FmgrInfo *) palloc0(fcinfo->nargs * sizeof(FmgrInfo));
+ arg_out_func = palloc0_array(FmgrInfo, fcinfo->nargs);
numargs = get_func_arg_info(pl_tuple, &argtypes, &argnames, &argmodes);
/*
diff --git a/src/test/modules/test_integerset/test_integerset.c b/src/test/modules/test_integerset/test_integerset.c
index cfdc6762785..4f92e710208 100644
--- a/src/test/modules/test_integerset/test_integerset.c
+++ b/src/test/modules/test_integerset/test_integerset.c
@@ -147,7 +147,7 @@ test_pattern(const test_spec *spec)
/* Pre-process the pattern, creating an array of integers from it. */
patternlen = strlen(spec->pattern_str);
- pattern_values = palloc(patternlen * sizeof(uint64));
+ pattern_values = palloc_array(uint64, patternlen);
pattern_num_values = 0;
for (int i = 0; i < patternlen; i++)
{
@@ -385,7 +385,7 @@ test_single_value_and_filler(uint64 value, uint64 filler_min, uint64 filler_max)
intset = intset_create();
- iter_expected = palloc(sizeof(uint64) * (filler_max - filler_min + 1));
+ iter_expected = palloc_array(uint64, (filler_max - filler_min + 1));
if (value < filler_min)
{
intset_add_member(intset, value);
diff --git a/src/test/modules/test_parser/test_parser.c b/src/test/modules/test_parser/test_parser.c
index 15ed3617cb5..5940d167c7d 100644
--- a/src/test/modules/test_parser/test_parser.c
+++ b/src/test/modules/test_parser/test_parser.c
@@ -46,7 +46,7 @@ PG_FUNCTION_INFO_V1(testprs_lextype);
Datum
testprs_start(PG_FUNCTION_ARGS)
{
- ParserState *pst = (ParserState *) palloc0(sizeof(ParserState));
+ ParserState *pst = palloc0_object(ParserState);
pst->buffer = (char *) PG_GETARG_POINTER(0);
pst->len = PG_GETARG_INT32(1);
@@ -112,7 +112,7 @@ testprs_lextype(PG_FUNCTION_ARGS)
* the same lexids like Teodor in the default word parser; in this way we
* can reuse the headline function of the default word parser.
*/
- LexDescr *descr = (LexDescr *) palloc(sizeof(LexDescr) * (2 + 1));
+ LexDescr *descr = palloc_array(LexDescr, (2 + 1));
/* there are only two types in this parser */
descr[0].lexid = 3;
diff --git a/src/test/modules/test_radixtree/test_radixtree.c b/src/test/modules/test_radixtree/test_radixtree.c
index 32de6a3123e..884e9ff0d9c 100644
--- a/src/test/modules/test_radixtree/test_radixtree.c
+++ b/src/test/modules/test_radixtree/test_radixtree.c
@@ -185,7 +185,7 @@ test_basic(rt_node_class_test_elem *test_info, int shift, bool asc)
elog(NOTICE, "testing node %s with shift %d and %s keys",
test_info->class_name, shift, asc ? "ascending" : "descending");
- keys = palloc(sizeof(uint64) * children);
+ keys = palloc_array(uint64, children);
for (int i = 0; i < children; i++)
{
if (asc)
diff --git a/src/test/modules/test_rbtree/test_rbtree.c b/src/test/modules/test_rbtree/test_rbtree.c
index 9113f1c8d52..105364d141b 100644
--- a/src/test/modules/test_rbtree/test_rbtree.c
+++ b/src/test/modules/test_rbtree/test_rbtree.c
@@ -96,7 +96,7 @@ GetPermutation(int size)
int *permutation;
int i;
- permutation = (int *) palloc(size * sizeof(int));
+ permutation = palloc_array(int, size);
permutation[0] = 0;
@@ -417,8 +417,8 @@ testdelete(int size, int delsize)
rbt_populate(tree, size, 1);
/* Choose unique ids to delete */
- deleteIds = (int *) palloc(delsize * sizeof(int));
- chosen = (bool *) palloc0(size * sizeof(bool));
+ deleteIds = palloc_array(int, delsize);
+ chosen = palloc0_array(bool, size);
for (i = 0; i < delsize; i++)
{
diff --git a/src/test/modules/test_regex/test_regex.c b/src/test/modules/test_regex/test_regex.c
index 2548a0ef7b1..7bf4375b4aa 100644
--- a/src/test/modules/test_regex/test_regex.c
+++ b/src/test/modules/test_regex/test_regex.c
@@ -168,7 +168,7 @@ test_re_compile(text *text_re, int cflags, Oid collation,
char errMsg[100];
/* Convert pattern string to wide characters */
- pattern = (pg_wchar *) palloc((text_re_len + 1) * sizeof(pg_wchar));
+ pattern = palloc_array(pg_wchar, (text_re_len + 1));
pattern_len = pg_mb2wchar_with_len(text_re_val,
pattern,
text_re_len);
@@ -436,7 +436,7 @@ setup_test_matches(text *orig_str,
Oid collation,
bool use_subpatterns)
{
- test_regex_ctx *matchctx = palloc0(sizeof(test_regex_ctx));
+ test_regex_ctx *matchctx = palloc0_object(test_regex_ctx);
int eml = pg_database_encoding_max_length();
int orig_len;
pg_wchar *wide_str;
@@ -457,7 +457,7 @@ setup_test_matches(text *orig_str,
/* convert string to pg_wchar form for matching */
orig_len = VARSIZE_ANY_EXHDR(orig_str);
- wide_str = (pg_wchar *) palloc(sizeof(pg_wchar) * (orig_len + 1));
+ wide_str = palloc_array(pg_wchar, (orig_len + 1));
wide_len = pg_mb2wchar_with_len(VARDATA_ANY(orig_str), wide_str, orig_len);
/* do we want to remember subpatterns? */
@@ -474,7 +474,7 @@ setup_test_matches(text *orig_str,
}
/* temporary output space for RE package */
- pmatch = palloc(sizeof(regmatch_t) * pmatch_len);
+ pmatch = palloc_array(regmatch_t, pmatch_len);
/*
* the real output space (grown dynamically if needed)
diff --git a/src/test/modules/test_resowner/test_resowner_basic.c b/src/test/modules/test_resowner/test_resowner_basic.c
index 8f794996371..635235d2245 100644
--- a/src/test/modules/test_resowner/test_resowner_basic.c
+++ b/src/test/modules/test_resowner/test_resowner_basic.c
@@ -64,7 +64,7 @@ test_resowner_priorities(PG_FUNCTION_ARGS)
parent = ResourceOwnerCreate(CurrentResourceOwner, "test parent");
child = ResourceOwnerCreate(parent, "test child");
- before_desc = palloc(nkinds * sizeof(ResourceOwnerDesc));
+ before_desc = palloc_array(ResourceOwnerDesc, nkinds);
for (int i = 0; i < nkinds; i++)
{
before_desc[i].name = psprintf("test resource before locks %d", i);
@@ -73,7 +73,7 @@ test_resowner_priorities(PG_FUNCTION_ARGS)
before_desc[i].ReleaseResource = ReleaseString;
before_desc[i].DebugPrint = PrintString;
}
- after_desc = palloc(nkinds * sizeof(ResourceOwnerDesc));
+ after_desc = palloc_array(ResourceOwnerDesc, nkinds);
for (int i = 0; i < nkinds; i++)
{
after_desc[i].name = psprintf("test resource after locks %d", i);
diff --git a/src/test/modules/test_resowner/test_resowner_many.c b/src/test/modules/test_resowner/test_resowner_many.c
index 1f64939404f..0f61abb30a5 100644
--- a/src/test/modules/test_resowner/test_resowner_many.c
+++ b/src/test/modules/test_resowner/test_resowner_many.c
@@ -121,7 +121,7 @@ RememberManyTestResources(ResourceOwner owner,
for (int i = 0; i < nresources; i++)
{
- ManyTestResource *mres = palloc(sizeof(ManyTestResource));
+ ManyTestResource *mres = palloc_object(ManyTestResource);
mres->kind = &kinds[kind_idx];
dlist_node_init(&mres->node);
@@ -226,7 +226,7 @@ test_resowner_many(PG_FUNCTION_ARGS)
elog(ERROR, "nforget_al must between 0 and 'nremember_al'");
/* Initialize all the different resource kinds to use */
- before_kinds = palloc(nkinds * sizeof(ManyTestResourceKind));
+ before_kinds = palloc_array(ManyTestResourceKind, nkinds);
for (int i = 0; i < nkinds; i++)
{
InitManyTestResourceKind(&before_kinds[i],
@@ -234,7 +234,7 @@ test_resowner_many(PG_FUNCTION_ARGS)
RESOURCE_RELEASE_BEFORE_LOCKS,
RELEASE_PRIO_FIRST + i);
}
- after_kinds = palloc(nkinds * sizeof(ManyTestResourceKind));
+ after_kinds = palloc_array(ManyTestResourceKind, nkinds);
for (int i = 0; i < nkinds; i++)
{
InitManyTestResourceKind(&after_kinds[i],
diff --git a/src/test/modules/test_rls_hooks/test_rls_hooks.c b/src/test/modules/test_rls_hooks/test_rls_hooks.c
index b1f161cf7bb..86453f96147 100644
--- a/src/test/modules/test_rls_hooks/test_rls_hooks.c
+++ b/src/test/modules/test_rls_hooks/test_rls_hooks.c
@@ -44,7 +44,7 @@ List *
test_rls_hooks_permissive(CmdType cmdtype, Relation relation)
{
List *policies = NIL;
- RowSecurityPolicy *policy = palloc0(sizeof(RowSecurityPolicy));
+ RowSecurityPolicy *policy = palloc0_object(RowSecurityPolicy);
Datum role;
FuncCall *n;
Node *e;
@@ -112,7 +112,7 @@ List *
test_rls_hooks_restrictive(CmdType cmdtype, Relation relation)
{
List *policies = NIL;
- RowSecurityPolicy *policy = palloc0(sizeof(RowSecurityPolicy));
+ RowSecurityPolicy *policy = palloc0_object(RowSecurityPolicy);
Datum role;
FuncCall *n;
Node *e;
diff --git a/src/test/modules/worker_spi/worker_spi.c b/src/test/modules/worker_spi/worker_spi.c
index 5b87d4f7038..646a9e2bcae 100644
--- a/src/test/modules/worker_spi/worker_spi.c
+++ b/src/test/modules/worker_spi/worker_spi.c
@@ -142,7 +142,7 @@ worker_spi_main(Datum main_arg)
char *p;
bits32 flags = 0;
- table = palloc(sizeof(worktable));
+ table = palloc_object(worktable);
sprintf(name, "schema%d", index);
table->schema = pstrdup(name);
table->name = pstrdup("counted");
diff --git a/src/test/regress/regress.c b/src/test/regress/regress.c
index 5c7158f72b1..4db93c6dbdf 100644
--- a/src/test/regress/regress.c
+++ b/src/test/regress/regress.c
@@ -191,7 +191,7 @@ widget_in(PG_FUNCTION_ARGS)
errmsg("invalid input syntax for type %s: \"%s\"",
"widget", str)));
- result = (WIDGET *) palloc(sizeof(WIDGET));
+ result = palloc_object(WIDGET);
result->center.x = atof(coord[0]);
result->center.y = atof(coord[1]);
result->radius = atof(coord[2]);
@@ -380,8 +380,8 @@ ttdummy(PG_FUNCTION_ARGS)
SPI_connect();
/* Fetch tuple values and nulls */
- cvals = (Datum *) palloc(natts * sizeof(Datum));
- cnulls = (char *) palloc(natts * sizeof(char));
+ cvals = palloc_array(Datum, natts);
+ cnulls = palloc_array(char, natts);
for (i = 0; i < natts; i++)
{
cvals[i] = SPI_getbinval((newtuple != NULL) ? newtuple : trigtuple,
@@ -412,7 +412,7 @@ ttdummy(PG_FUNCTION_ARGS)
char *query;
/* allocate space in preparation */
- ctypes = (Oid *) palloc(natts * sizeof(Oid));
+ ctypes = palloc_array(Oid, natts);
query = (char *) palloc(100 + 16 * natts);
/*
@@ -499,7 +499,7 @@ Datum
int44in(PG_FUNCTION_ARGS)
{
char *input_string = PG_GETARG_CSTRING(0);
- int32 *result = (int32 *) palloc(4 * sizeof(int32));
+ int32 *result = palloc_array(int32, 4);
int i;
i = sscanf(input_string,
@@ -576,8 +576,8 @@ make_tuple_indirect(PG_FUNCTION_ARGS)
tuple.t_tableOid = InvalidOid;
tuple.t_data = rec;
- values = (Datum *) palloc(ncolumns * sizeof(Datum));
- nulls = (bool *) palloc(ncolumns * sizeof(bool));
+ values = palloc_array(Datum, ncolumns);
+ nulls = palloc_array(bool, ncolumns);
heap_deform_tuple(&tuple, tupdesc, values, nulls);
@@ -657,7 +657,7 @@ get_environ(PG_FUNCTION_ARGS)
for (char **s = environ; *s; s++)
nvals++;
- env = palloc(nvals * sizeof(Datum));
+ env = palloc_array(Datum, nvals);
for (int i = 0; i < nvals; i++)
env[i] = CStringGetTextDatum(environ[i]);
diff --git a/src/timezone/pgtz.c b/src/timezone/pgtz.c
index 671b4d76237..f32ebb9bfdd 100644
--- a/src/timezone/pgtz.c
+++ b/src/timezone/pgtz.c
@@ -396,7 +396,7 @@ struct pg_tzenum
pg_tzenum *
pg_tzenumerate_start(void)
{
- pg_tzenum *ret = (pg_tzenum *) palloc0(sizeof(pg_tzenum));
+ pg_tzenum *ret = palloc0_object(pg_tzenum);
char *startdir = pstrdup(pg_TZDIR());
ret->baselen = strlen(startdir) + 1;
diff --git a/src/tutorial/complex.c b/src/tutorial/complex.c
index 6798a9e6ba6..46dc54e62d0 100644
--- a/src/tutorial/complex.c
+++ b/src/tutorial/complex.c
@@ -41,7 +41,7 @@ complex_in(PG_FUNCTION_ARGS)
errmsg("invalid input syntax for type %s: \"%s\"",
"complex", str)));
- result = (Complex *) palloc(sizeof(Complex));
+ result = palloc_object(Complex);
result->x = x;
result->y = y;
PG_RETURN_POINTER(result);
@@ -73,7 +73,7 @@ complex_recv(PG_FUNCTION_ARGS)
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
Complex *result;
- result = (Complex *) palloc(sizeof(Complex));
+ result = palloc_object(Complex);
result->x = pq_getmsgfloat8(buf);
result->y = pq_getmsgfloat8(buf);
PG_RETURN_POINTER(result);
@@ -108,7 +108,7 @@ complex_add(PG_FUNCTION_ARGS)
Complex *b = (Complex *) PG_GETARG_POINTER(1);
Complex *result;
- result = (Complex *) palloc(sizeof(Complex));
+ result = palloc_object(Complex);
result->x = a->x + b->x;
result->y = a->y + b->y;
PG_RETURN_POINTER(result);
diff --git a/src/tutorial/funcs.c b/src/tutorial/funcs.c
index f597777a1ff..a1a9da80fc5 100644
--- a/src/tutorial/funcs.c
+++ b/src/tutorial/funcs.c
@@ -48,7 +48,7 @@ makepoint(PG_FUNCTION_ARGS)
{
Point *pointx = PG_GETARG_POINT_P(0);
Point *pointy = PG_GETARG_POINT_P(1);
- Point *new_point = (Point *) palloc(sizeof(Point));
+ Point *new_point = palloc_object(Point);
new_point->x = pointx->x;
new_point->y = pointy->y;
--
2.43.0
On Sat, Jan 18, 2025 at 08:44:00PM +0100, Mats Kindahl wrote:
For PostgreSQL 16, Peter extended the palloc()/pg_malloc() interface in
commit 2016055a92f to provide more type-safety, but these functions are not
widely used. This semantic patch captures and replaces all uses of palloc()
where palloc_array() or palloc_object() could be used instead. It
deliberately does not touch cases where it is not clear that the
replacement can be done.
I am not sure how much a dependency to coccicheck would cost (usually
such changes should require a case-by-case analysis rather than a
blind automation), but palloc_array() and palloc_object() are
available down to v13.
Based on this argument, it would be tempting to apply this rule
across the stable branches to reduce conflict churn. However this is
an improvement in readability, like the talloc() things as Peter has
mentioned, hence it should be a HEAD-only thing. I do like the idea
of forcing more the object-palloc style on HEAD in the tree in some
areas of the code, even if it would come with some backpatching cost
for existing code.
Thoughts? Perhaps this has been discussed previously?
--
Michael
On Sun, Jan 19, 2025 at 2:10 AM Michael Paquier <michael@paquier.xyz> wrote:
On Sat, Jan 18, 2025 at 08:44:00PM +0100, Mats Kindahl wrote:
For PostgreSQL 16, Peter extended the palloc()/pg_malloc() interface in
commit 2016055a92f to provide more type-safety, but these functions are not
widely used. This semantic patch captures and replaces all uses of palloc()
where palloc_array() or palloc_object() could be used instead. It
deliberately does not touch cases where it is not clear that the
replacement can be done.

I am not sure how much a dependency to coccicheck would cost (usually
such changes should require a case-by-case analysis rather than a
blind automation), but palloc_array() and palloc_object() are
available down to v13.
This script is intended to be conservative in that it should not do
replacements that are not clearly suitable for palloc_array/palloc_object.
Since the intention is that it should automatically generate patches for
cases that can be improved, I think the best strategy is to err on the side
of caution and skip a replacement unless it is clearly an improvement.
Based on this argument, it would be tempting to apply this rule
across the stable branches to reduce conflict churn. However this is
an improvement in readability, like the talloc() things as Peter has
mentioned, hence it should be a HEAD-only thing.
I would argue that it is a HEAD-only thing. The main reason is that backports
always risk creating extra work, even if they look innocent, and it is
usually better to only backport patches that *really* need to be backported.
I do like the idea
of forcing more the object-palloc style on HEAD in the tree in some
areas of the code, even if it would come with some backpatching cost
for existing code.

Thoughts? Perhaps this has been discussed previously?
My main reasoning around this patch is that the palloc_array and
palloc_object were introduced for a reason, in this case for type-safety
and readability, and not using them widely in the code base sort of defeats
the purpose of adding the functions at all. Doing it manually is a chore,
but with Coccinelle we can do these kinds of large rewrites easily.
--
Best wishes,
Mats Kindahl, Timescale
On Sat, Jan 18, 2025 at 8:44 PM Mats Kindahl <mats@timescale.com> wrote:
On Tue, Jan 14, 2025 at 4:19 PM Aleksander Alekseev <aleksander@timescale.com> wrote:

IMO the best solution would be re-submitting all the patches to this
thread. Also please make sure the patchset is registered on the
nearest open CF [1]. This will ensure that the patchset is built on our
CI (aka cfbot [2]) and will not be lost.

[1]: https://commitfest.postgresql.org/
[2]: http://cfbot.cputube.org/
Hi all,
Here is a new set of patches rebased on the latest version of Postgres.
I decided to just include the semantic patches in each patch since it is
trivial to generate the patch and build using:
ninja coccicheck-patch | patch -d .. -p1 && ninja
I repeat the description from the previous patch set and add comments where
things have changed, but I have also added two semantic patches, which are
described last.
For those of you that are not aware of it: Coccinelle is a tool for pattern
matching and text transformation for C code and can be used for detection
of problematic programming patterns and to make complex, tree-wide patches
easy. It is aware of the structure of C code and is better suited to make
complicated changes than what is possible using normal text substitution
tools like Sed and Perl. I've noticed it's been used in a few cases way
back to fix issues.[1]

Coccinelle has been used successfully in the Linux project since
2008 and is now an established tool for Linux development and a large
number of semantic patches have been added to the source tree to capture
everything from generic issues (like eliminating the redundant A in
expressions like "!A || (A && B)") to more Linux-specific problems like
adding a missing call to kfree().

Although PostgreSQL is nowhere the size of the Linux kernel, it is
nevertheless of a significant size and would benefit from incorporating
Coccinelle into the development. I noticed it's been used in a few cases
way back (like 10 years back) to fix issues in the PostgreSQL code, but I
thought it might be useful to make it part of normal development practice
to, among other things:

- Identify and correct bugs in the source code both during development and
review.
- Make large-scale changes to the source tree to improve the code based on
new insights.
- Encode and enforce APIs by ensuring that function calls are used
correctly.
- Use improved coding patterns for more efficient code.
- Allow extensions to automatically update code for later PostgreSQL
versions.

To that end, I created a series of patches to show how it could be used in
the PostgreSQL tree. It is a lot easier to discuss concrete code and I
split it up into separate messages since that makes it easier to discuss
each individual patch. The series contains code to make it easy to work
with Coccinelle during development and reviews, as well as examples of
semantic patches that capture problems, demonstrate how to make large-scale
changes, how to enforce APIs, and also improve some coding patterns.

The first three patches contain the coccicheck.py script and the
integration with the build system (both Meson and Autoconf).

# Coccicheck Script
It is a re-implementation of the coccicheck script that the Linux kernel
uses. We cannot immediately use the coccicheck script since it is quite
closely tied to the Linux source code tree and we need to have something
that both supports Autoconf and Meson. Since Python seems to be used more
and more in the tree, it seems to be the most natural choice. (I have no
strong opinion on what language to use, but think it would be good to have
something that is as platform-independent as possible.)

The intention is that we should be able to use the Linux semantic patches
directly, so it supports the "Requires" and "Options" keywords, which can
be used to require a specific version of spatch(1) and add options to the
execution of that semantic patch, respectively.
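For example, the first lines of a semantic patch could declare both
keywords as comments (the version number here is only an illustration,
not a requirement of any of the attached patches):

// Requires: 1.0.8
// Options: --no-includes --include-headers

The "Options:" form above is what the attached semantic patches already
use; coccicheck.py appends those options to the spatch(1) call and skips
the file if the installed spatch is older than the required version.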
I have added support for using multiple jobs, similar to how "make -jN"
works. This is also supported by the autoconf and ninja builds.
# Autoconf support
The changes to Autoconf modify configure.ac and related files (in
particular Makefile.global.in). At this point, I have deliberately not
added support for pgxs so extensions cannot use coccicheck through the
PostgreSQL installation. This is something that we can add later though.

The semantic patches are expected to live in the cocci/ directory under the
root, and the patch uses the pattern cocci/**/*.cocci to find all semantic
patches. Right now there are no subdirectories for the semantic patches,
but this might be something we want to add to create different categories
of scripts.

The coccicheck target is used in the same way as for the Linux kernel,
that is, to generate and apply all patches suggested by the semantic
patches, you type:

make coccicheck MODE=patch | patch -p1
Linux has support for a few more variables: V to set the verbosity, J to
use multiple jobs for processing the semantic patches, M to select a
different directory to apply the semantic patches to, and COCCI to use a
single specific semantic patch rather than all available. I have not added
support for this right now, but if you think this is valuable, it should be
straightforward to add.

I used autoconf 2.69, as mentioned in configure.ac, but that generated a
bigger diff than I expected. Any advice here is welcome.

Using the parameter "JOBS" allows you to use multiple jobs, e.g.:
make coccicheck MODE=patch JOBS=4 | patch -p1
# Meson Support
The support for Meson is done by adding three coccicheck targets: one for
each mode. To apply all patches suggested by the semantic patches using
ninja (as is done in [2]), you type the following in the build directory
generated by Meson (e.g., the "build/" subdirectory).

ninja coccicheck-patch | patch -p1 -d ..

If you want to pass other flags you have to set the SPFLAGS environment
variable when calling ninja:

SPFLAGS=--debug ninja coccicheck-report
If you want to use multiple jobs, you use something like this:
JOBS=4 ninja coccicheck-patch | patch -d .. -p1
# Semantic Patch: Wrong type for palloc()
This is the first example of a semantic patch and shows how to capture and
fix a common problem.

If you use palloc() to allocate memory for an object (or an array of
objects) and by mistake type something like:

StringInfoData *info = palloc(sizeof(StringInfoData*));

You will not allocate enough memory for storing the object. This semantic
patch catches any case where you allocate either an array of objects or a
single object with an incorrect type in this sense; more precisely, it
captures assignments to a variable of type T* where palloc() uses sizeof(T)
either alone or multiplied by a single expression (assuming this is an
array count).

The semantic patch is overzealous in the sense that using the wrong
typedef will suggest a change (this can be seen in the patch). Although the
sizes of these are the same, it is probably better to just follow the
convention of always using the type "T*" for any "palloc(sizeof(T))" since
the typedef can change at any point and would then introduce a bug.
Coccicheck can easily fix this for you, so it is straightforward to enforce
this. It also simplifies other automated checking to follow this convention.

We don't really have any real bugs as a result of this, but we have one
case where an allocation of "sizeof(LLVMBasicBlockRef*)" is assigned to an
"LLVMBasicBlockRef*", which strictly speaking is not correct (it should be
"sizeof(LLVMBasicBlockRef)"). However, since they are both pointers, there
is no risk of an incorrect allocation size; it is just one typedef usage
that does not match.
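To make the effect concrete, in patch mode the rule would roughly rewrite
the earlier example so that the sizeof() uses the variable's type, while
report mode only prints the location and a message:

-	StringInfoData *info = palloc(sizeof(StringInfoData *));
+	StringInfoData *info = palloc(sizeof(StringInfoData));

This is a sketch of the intent rather than verbatim output from the tool.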
# Semantic Patch: Introduce palloc_array() and palloc_object() where possible

This is an example of a large-scale refactoring to improve the code.
For PostgreSQL 16, Peter extended the palloc()/pg_malloc() interface in
commit 2016055a92f to provide more type-safety, but these functions are not
widely used. This semantic patch captures and replaces all uses of palloc()
where palloc_array() or palloc_object() could be used instead. It
deliberately does not touch cases where it is not clear that the
replacement can be done.
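For example, the generated patch earlier in this message contains
rewrites like these, where the element type is the pointed-to type of the
variable being assigned:

-	values = (Datum *) palloc(nargs * sizeof(Datum));
+	values = palloc_array(Datum, nargs);

-	prodesc = (plperl_proc_desc *) palloc0(sizeof(plperl_proc_desc));
+	prodesc = palloc0_object(plperl_proc_desc);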
# Semantic Patch: Replace code with pg_cmp_*
This is an example of a large-scale refactoring to improve the code.
In commit 3b42bdb4716 and 6b80394781c overflow-safe comparison functions
were introduced, but they are not widely used. This semantic patch
identifies some of the more common cases and replaces them with calls to
the corresponding pg_cmp_* function.
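For illustration, assuming lhs and rhs are int32 variables (which the
rule maps to pg_cmp_s32()), a comparator body like this:

	if (lhs < rhs)
		return -1;
	if (lhs > rhs)
		return 1;
	return 0;

is rewritten to:

	return pg_cmp_s32(lhs, rhs);

and the semantic patch adds #include "common/int.h" if it is missing.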
The patches save a few instructions when checking the generated code with
Godbolt. It's not much, but since it is so easy to apply them, it might
still be worthwhile.
# Semantic Patch: Replace dynamic allocation of StringInfo with
StringInfoData
Use improved coding patterns for more efficient code.
This semantic patch replaces uses of StringInfo with StringInfoData where
the info is dynamically allocated but (optionally) freed at the end of the
block. This will avoid one dynamic allocation that otherwise has to be
dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the
pointer can "leak" outside the function either through a global variable or
a parameter assignment.
- If an assignment is done to the expression. This cannot leak the data,
but could mean a value-assignment of a structure, so we avoid this case.
- If the pointer is returned.
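For instance, a hypothetical function along these lines is left alone,
since the pointer is returned and may outlive the current block:

	StringInfo
	make_message(void)
	{
		StringInfo info = makeStringInfo();

		appendStringInfoString(info, "...");
		return info;
	}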
The cases that this semantic patch fixed when I uploaded the first version
of the other patches seem to have been dealt with, but having it as part
of the code base prevents such cases from surfacing again.
[1]: https://coccinelle.gitlabpages.inria.fr/website/
[2]: https://www.postgresql.org/docs/current/install-meson.html
--
Best wishes,
Mats Kindahl, Timescale
Attachments:
0005-Semantic-patch-for-palloc_array-and-palloc_object.v3.patch (text/x-patch)
From 3836532bf2a2715fd4ba6af4ba5a15a8a2a94d64 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 20:23:25 +0100
Subject: Semantic patch for palloc_array and palloc_object
Macros were added to the palloc API in commit 2016055a92f to improve
type-safety, but very few instances were replaced. This adds a cocci script to
do that replacement. The semantic patch deliberately does not replace instances
where the type of the variable and the type used in the macro do not match.
---
cocci/palloc_array.cocci | 157 +++++++++++++++++++++++++++++++++++++++
1 file changed, 157 insertions(+)
create mode 100644 cocci/palloc_array.cocci
diff --git a/cocci/palloc_array.cocci b/cocci/palloc_array.cocci
new file mode 100644
index 00000000000..aeeab74c3a9
--- /dev/null
+++ b/cocci/palloc_array.cocci
@@ -0,0 +1,157 @@
+// Since PG16 there are array versions of common palloc operations, so
+// we can use those instead.
+//
+// We ignore cases where we have a anonymous struct and also when the
+// type of the variable being assigned to is different from the
+// inferred type.
+//
+// Options: --no-includes --include-headers
+
+virtual patch
+virtual report
+virtual context
+
+// These rules (soN) are needed to rewrite types of the form
+// sizeof(T[C]) to C * sizeof(T) since Cocci cannot (currently) handle
+// it.
+@initialize:python@
+@@
+import re
+
+CRE = re.compile(r'(.*)\s+\[\s+(\d+)\s+\]$')
+
+def is_array_type(s):
+ mre = CRE.match(s)
+ return (mre is not None)
+
+@so1 depends on patch@
+type T : script:python() { is_array_type(T) };
+@@
+palloc(sizeof(T))
+
+@script:python so2 depends on patch@
+T << so1.T;
+T2;
+E;
+@@
+mre = CRE.match(T)
+coccinelle.T2 = cocci.make_type(mre.group(1))
+coccinelle.E = cocci.make_expr(mre.group(2))
+
+@depends on patch@
+type so1.T;
+type so2.T2;
+expression so2.E;
+@@
+- palloc(sizeof(T))
++ palloc(E * sizeof(T2))
+
+@r1 depends on report || context@
+type T !~ "^struct {";
+expression E;
+position p;
+idexpression T *I;
+identifier alloc = {palloc0, palloc};
+@@
+* I = alloc@p(E * sizeof(T))
+
+@script:python depends on report@
+p << r1.p;
+alloc << r1.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+T *P;
+idexpression T* I;
+constant C;
+identifier alloc = {palloc0, palloc};
+fresh identifier alloc_array = alloc ## "_array";
+@@
+(
+- I = (T*) alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = (T*) alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+)
+
+@r3 depends on report || context@
+type T !~ "^struct {";
+expression E;
+idexpression T *P;
+idexpression T *I;
+position p;
+@@
+* I = repalloc@p(P, E * sizeof(T))
+
+@script:python depends on report@
+p << r3.p;
+@@
+coccilib.report.print_report(p[0], "this repalloc can be replaced with repalloc_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+idexpression T *P1;
+idexpression T *P2;
+idexpression T *I;
+constant C;
+@@
+(
+- I = (T*) repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = (T*) repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+)
+
+@r4 depends on report || context@
+type T !~ "^struct {";
+position p;
+idexpression T* I;
+identifier alloc = {palloc, palloc0};
+@@
+* I = alloc@p(sizeof(T))
+
+@script:python depends on report@
+p << r4.p;
+alloc << r4.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_object")
+
+@depends on patch@
+type T !~ "^struct {";
+T* P;
+idexpression T *I;
+constant C;
+identifier alloc = {palloc, palloc0};
+fresh identifier alloc_object = alloc ## "_object";
+@@
+(
+- I = (T*) alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = (T*) alloc(sizeof(T))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof(T))
++ I = alloc_object(T)
+)
--
2.43.0
0003-Add-meson-build-for-coccicheck.v3.patch (text/x-patch)
From 2b9e5b5f02e3ea1cd7e05e81f46878091e6bbd50 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Wed, 1 Jan 2025 14:15:51 +0100
Subject: Add meson build for coccicheck
This commit adds coccicheck run targets to the meson build files.
Since ninja does not accept parameters the same way make does, there are three
run targets defined---"coccicheck-patch", "coccicheck-report", and
"coccicheck-context"---that you can use to generate a patch, get a report, and
get the context respectively. For example, to patch the tree from the "build"
subdirectory created by the meson run:
ninja coccicheck-patch | patch -d .. -p1
---
meson.build | 29 +++++++++++++++++++++++++++++
meson_options.txt | 7 ++++++-
src/makefiles/meson.build | 6 ++++++
3 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/meson.build b/meson.build
index 13c13748e5d..044171f80e4 100644
--- a/meson.build
+++ b/meson.build
@@ -348,6 +348,7 @@ missing = find_program('config/missing', native: true)
cp = find_program('cp', required: false, native: true)
xmllint_bin = find_program(get_option('XMLLINT'), native: true, required: false)
xsltproc_bin = find_program(get_option('XSLTPROC'), native: true, required: false)
+spatch = find_program(get_option('SPATCH'), native: true, required: false)
bison_flags = []
if bison.found()
@@ -1642,6 +1643,33 @@ else
endif
+###############################################################
+# Option: Coccinelle checks
+###############################################################
+
+coccicheck_opt = get_option('coccicheck')
+coccicheck_dep = not_found_dep
+if not coccicheck_opt.disabled()
+ if spatch.found()
+ coccicheck_dep = declare_dependency()
+ elif coccicheck_opt.enabled()
+ error('missing required tools (spatch needed) for Coccinelle checks')
+ endif
+endif
+
+coccicheck_modes = ['context', 'report', 'patch']
+
+foreach mode : coccicheck_modes
+ run_target('coccicheck-' + mode,
+ command: [python, files('src/tools/coccicheck.py'),
+ '--mode', mode,
+ '--spatch', spatch,
+ '--patchdir', '@SOURCE_ROOT@',
+ '@SOURCE_ROOT@/cocci/**/*.cocci',
+ '@SOURCE_ROOT@/src',
+ '@SOURCE_ROOT@/contrib',
+ ])
+endforeach
###############################################################
# Compiler tests
@@ -3808,6 +3836,7 @@ if meson.version().version_compare('>=0.57')
{
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
+ 'spatch': spatch,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
},
section: 'Programs',
diff --git a/meson_options.txt b/meson_options.txt
index 702c4517145..37d6d43af93 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -43,6 +43,9 @@ option('cassert', type: 'boolean', value: false,
option('tap_tests', type: 'feature', value: 'auto',
description: 'Enable TAP tests')
+option('coccicheck', type: 'feature', value: 'auto',
+ description: 'Enable Coccinelle checks')
+
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -52,7 +55,6 @@ option('PG_TEST_EXTRA', type: 'string', value: '',
option('PG_GIT_REVISION', type: 'string', value: 'HEAD',
description: 'git revision to be packaged by pgdist target')
-
# Compilation options
option('extra_include_dirs', type: 'array', value: [],
@@ -195,6 +197,9 @@ option('PYTHON', type: 'array', value: ['python3', 'python'],
option('SED', type: 'string', value: 'gsed',
description: 'Path to sed binary')
+option('SPATCH', type: 'string', value: 'spatch',
+ description: 'Path to spatch binary, used for SmPL patches')
+
option('STRIP', type: 'string', value: 'strip',
description: 'Path to strip binary, used for PGXS emulation')
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 60e13d50235..c66156d9046 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -57,6 +57,7 @@ pgxs_kv = {
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
+ 'enable_coccicheck': spatch.found() ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -151,6 +152,7 @@ pgxs_bins = {
'TAR': tar,
'ZSTD': program_zstd,
'DTRACE': dtrace,
+ 'SPATCH': spatch,
}
pgxs_empty = [
@@ -166,6 +168,10 @@ pgxs_empty = [
'DBTOEPUB',
'FOP',
+ # Coccinelle is not supported by pgxs
+ 'SPATCH',
+ 'SPFLAGS',
+
# supporting coverage for pgxs-in-meson build doesn't seem worth it
'GENHTML',
'LCOV',
--
2.43.0
0004-Semantic-patch-for-sizeof-using-palloc.v3.patch (text/x-patch)
From 189fed4b2a3603a0cb02c54cd3c4208d0631312f Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 5 Jan 2025 19:26:47 +0100
Subject: Semantic patch for sizeof() using palloc()
If palloc() is used to allocate elements of type T, the result should be assigned
to a variable of type T*, or you risk out-of-bounds accesses. This semantic patch checks
that allocations to variables of type T* are using sizeof(T) when allocating
memory using palloc().
---
cocci/palloc_sizeof.cocci | 49 +++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
create mode 100644 cocci/palloc_sizeof.cocci
diff --git a/cocci/palloc_sizeof.cocci b/cocci/palloc_sizeof.cocci
new file mode 100644
index 00000000000..5f8593c2687
--- /dev/null
+++ b/cocci/palloc_sizeof.cocci
@@ -0,0 +1,49 @@
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+import re
+
+CONST_CRE = re.compile(r'\bconst\b')
+
+def is_simple_type(s):
+ return s != 'void' and not CONST_CRE.search(s)
+
+@r1 depends on report || context@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+position p;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+* I = func@p(sizeof(T2))
+|
+* I = func@p(E * sizeof(T2))
+)
+
+@script:python depends on report@
+T1 << r1.T1;
+T2 << r1.T2;
+I << r1.I;
+p << r1.p;
+@@
+coccilib.report.print_report(p[0], f"'{I}' has type '{T1}*' but 'sizeof({T2})' is used to allocate memory")
+
+@depends on patch@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+- I = func(sizeof(T2))
++ I = func(sizeof(T1))
+|
+- I = func(E * sizeof(T2))
++ I = func(E * sizeof(T1))
+)
--
2.43.0
0006-Semantic-patch-for-pg_cmp_-functions.v3.patch (text/x-patch)
From b8d9a245f4a47b25b4a64ddbd4d0c0a17a3a1c3a Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Thu, 23 Jan 2025 02:46:14 +0100
Subject: Semantic patch for pg_cmp_* functions
In commits 3b42bdb4716 and 6b80394781c, overflow-safe comparison functions were
introduced, but they are not widely used. This semantic patch identifies some
of the more common cases and replaces them with calls to the corresponding
pg_cmp_* function.
---
cocci/use_pg_cmp.cocci | 125 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 cocci/use_pg_cmp.cocci
diff --git a/cocci/use_pg_cmp.cocci b/cocci/use_pg_cmp.cocci
new file mode 100644
index 00000000000..8a258e61e5d
--- /dev/null
+++ b/cocci/use_pg_cmp.cocci
@@ -0,0 +1,125 @@
+// Find cases where we can use the new pg_cmp_* functions.
+//
+// Copyright 2025 Mats Kindahl, Timescale.
+//
+// Options: --no-includes --include-headers
+
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+
+import re
+
+TYPMAP = {
+ 'BlockNumber': 'pg_cmp_u32',
+ 'ForkNumber': 'pg_cmp_s32',
+ 'OffsetNumber': 'pg_cmp_s16',
+ 'int': 'pg_cmp_s32',
+ 'int16': 'pg_cmp_s16',
+ 'int32': 'pg_cmp_s32',
+ 'uint16': 'pg_cmp_u16',
+ 'uint32': 'pg_cmp_u32',
+ 'unsigned int': 'pg_cmp_u32',
+}
+
+def is_valid(expr):
+ return not re.search(r'DatumGet[A-Za-z]+', expr)
+
+@r1e depends on context || report expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+* lhs@pos < rhs ? -1 : lhs > rhs ? 1 : 0
+
+@script:python depends on report@
+lhs << r1e.lhs;
+rhs << r1e.rhs;
+pos << r1e.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@r1 depends on context || report@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
+|
+* if@pos (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
+|
+* if@pos (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
+)
+
+@script:python depends on report@
+lhs << r1.lhs;
+rhs << r1.rhs;
+pos << r1.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@expr_repl depends on patch expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+- lhs < rhs ? -1 : lhs > rhs ? 1 : 0
++ cmp(lhs,rhs)
+
+@stmt_repl depends on patch@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+- if (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; else if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
++ return cmp(lhs,rhs);
+)
+
+// Add an include if there were none and we had to do some
+// replacements
+@has_include depends on patch@
+@@
+ #include "common/int.h"
+
+@depends on patch && !has_include && (stmt_repl || expr_repl)@
+@@
+ #include ...
++ #include "common/int.h"
--
2.43.0
0007-Semantic-patch-to-use-stack-allocated-StringInfoData.v3.patch (text/x-patch)
From 8786c5d81e65bb2e9eb3f3799ed29684df3df85e Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: Semantic patch to use stack-allocated StringInfoData
This semantic patch replaces uses of StringInfo with StringInfoData where the
info is dynamically allocated but (optionally) freed at the end of the block.
This will avoid one dynamic allocation that otherwise has to be dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the pointer can
"leak" outside the function either through a global variable or a parameter
assignment.
- If an assignment is done to the expression. This cannot leak the data, but
could mean a value-assignment of a structure, so we avoid this case.
- If the pointer is returned.
---
cocci/use_stringinfodata.cocci | 155 +++++++++++++++++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..4186027f8c9
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,155 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This will avoid one dynamic allocation that otherwise has
+// to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one is just a convenience.
+//
+// If we replace this, the resulting change will result in a value
+// copy of a structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* \(pfree\|destroyStringInfo\)(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" since we cannot
+// inherit over modifications
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+ ... when != S
+(
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
--
2.43.0
0001-Add-initial-coccicheck-script.v3.patch (text/x-patch)
From 882f3b150c61884b9e176a1f506747c35235ab0a Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 19:35:58 +0100
Subject: Add initial coccicheck script
The coccicheck.py script can be used to run several semantics patches on a
source tree to either generate a report, see the context of the modification
(what lines that requires changes), or generate a patch to correct an issue.
usage: coccicheck.py [-h] [--verbose] [--spatch SPATCH]
[--spflags SPFLAGS]
[--mode {patch,report,context}] [--jobs JOBS]
[--include DIR] [--patchdir DIR]
pattern path [path ...]
positional arguments:
pattern Pattern for Cocci files to use.
path Directory or source path to process.
options:
-h, --help show this help message and exit
--verbose, -v
--spatch SPATCH Path to spatch binary. Defaults to value of
environment variable SPATCH.
--spflags SPFLAGS Flags to pass to spatch call. Defaults to
value of environment variable SPFLAGS.
--mode {patch,report,context}
Mode to use for coccinelle. Defaults to
value of environment variable MODE.
--jobs JOBS Number of jobs to use for spatch. Defaults
to value of environment variable JOBS.
--include DIR, -I DIR
Extra include directories.
--patchdir DIR Path for which patch should be created
relative to.
---
src/tools/coccicheck.py | 185 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 185 insertions(+)
create mode 100755 src/tools/coccicheck.py
diff --git a/src/tools/coccicheck.py b/src/tools/coccicheck.py
new file mode 100755
index 00000000000..838f8184c54
--- /dev/null
+++ b/src/tools/coccicheck.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python3
+
+"""Run Coccinelle on a set of files and directories.
+
+This is a re-written version of the Linux ``coccicheck`` script.
+
+Coccicheck can run in three different modes (the original has four
+different modes):
+
+- *patch*: patch files using the cocci file.
+
+- *report*: report any improvements that this script can
+  make, but do not show any patch.
+
+- *context*: show the context where the patch can be applied.
+
+The program takes a glob pattern that selects one or more cocci files
+and calls spatch(1) for each of them with a set of paths that can be
+either files or directories.
+
+When starting, the cocci file will be parsed and any lines containing
+"Options:" or "Requires:" will be treated specially.
+
+- Lines containing "Options:" will have a list of options to add to
+ the call of the spatch(1) program. These options will be added last.
+
+- Lines containing "Requires:" can contain a version of spatch(1) that
+ is required for this cocci file. If the version requirements are not
+ satisfied, the file will not be used.
+
+When calling spatch(1), it will set the virtual rules "patch",
+"report", or "context" and the cocci file can use these to act
+differently depending on the mode.
+
+The following environment variables can be set:
+
+SPATCH: Path to spatch program. This will be used if no path is
+ passed using the option --spatch.
+
+SPFLAGS: Extra flags to use when calling spatch. These will be added
+ last.
+
+MODE: Mode to use. It will be used if no --mode is passed to
+ coccicheck.py.
+
+"""
+
+import argparse
+import os
+import sys
+import subprocess
+import re
+
+from pathlib import PurePath, Path
+from packaging import version
+
+VERSION_CRE = re.compile(
+ r'spatch version (\S+) compiled with OCaml version (\S+)'
+)
+
+
+def parse_metadata(cocci_file):
+ """Parse metadata in Cocci file."""
+ metadata = {}
+ with open(cocci_file) as fh:
+ for line in fh:
+            mre = re.search(r'(Options|Requires):\s*(.*)', line, re.IGNORECASE)
+ if mre:
+ metadata[mre.group(1).lower()] = mre.group(2)
+ return metadata
+
+
+def get_config(args):
+ """Compute configuration information."""
+ # Figure out spatch version. We just need to read the first line
+ config = {}
+ cmd = [args.spatch, '--version']
+ with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
+ for line in proc.stdout:
+ mre = VERSION_CRE.match(line)
+ if mre:
+ config['spatch_version'] = mre.group(1)
+ break
+ return config
+
+
+def run_spatch(cocci_file, args, config, env):
+ """Run coccinelle on the provided file."""
+ if args.verbose > 1:
+ print("processing cocci file", cocci_file)
+ spatch_version = config['spatch_version']
+ metadata = parse_metadata(cocci_file)
+
+ # Check that we have a valid version
+    if 'requires' in metadata:
+        required_version = version.parse(metadata['requires'])
+        if version.parse(spatch_version) < required_version:
+ print(
+ f'Skipping SmPL patch {cocci_file}: '
+ f'requires {required_version} (had {spatch_version})'
+ )
+ return
+
+ command = [
+ args.spatch,
+ "-D", args.mode,
+ "--cocci-file", cocci_file,
+ "--very-quiet",
+ ]
+
+ if 'options' in metadata:
+ command.append(metadata['options'])
+ if args.mode == 'report':
+ command.append('--no-show-diff')
+ if args.patchdir:
+ command.extend(['--patch', args.patchdir])
+ if args.jobs:
+ command.extend(['--jobs', args.jobs])
+ if args.spflags:
+ command.append(args.spflags)
+
+ for path in args.path:
+ subprocess.run(command + [path], env=env, check=True)
+
+
+def coccinelle(args, config, env):
+ """Run coccinelle on all files matching the provided pattern."""
+ root = '/' if PurePath(args.cocci).is_absolute() else '.'
+ count = 0
+ for cocci_file in Path(root).glob(args.cocci):
+ count += 1
+ run_spatch(cocci_file, args, config, env)
+ return count
+
+
+def main(argv):
+ """Run coccicheck."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--verbose', '-v', action='count', default=0)
+ parser.add_argument('--spatch', type=PurePath, metavar='SPATCH',
+ default=os.environ.get('SPATCH'),
+ help=('Path to spatch binary. Defaults to '
+ 'value of environment variable SPATCH.'))
+ parser.add_argument('--spflags', type=PurePath,
+ metavar='SPFLAGS',
+ default=os.environ.get('SPFLAGS', None),
+ help=('Flags to pass to spatch call. Defaults '
+                              'to value of environment variable SPFLAGS.'))
+ parser.add_argument('--mode', choices=['patch', 'report', 'context'],
+ default=os.environ.get('MODE', 'report'),
+ help=('Mode to use for coccinelle. Defaults to '
+ 'value of environment variable MODE.'))
+ parser.add_argument('--jobs', default=os.environ.get('JOBS', None),
+ help=('Number of jobs to use for spatch. Defaults to '
+ 'value of environment variable JOBS.'))
+ parser.add_argument('--include', '-I', type=PurePath,
+ metavar='DIR',
+ help='Extra include directories.')
+ parser.add_argument('--patchdir', type=PurePath, metavar='DIR',
+ help=('Path for which patch should be created '
+ 'relative to.'))
+ parser.add_argument('cocci', metavar='pattern',
+ help='Pattern for Cocci files to use.')
+ parser.add_argument('path', nargs='+', type=PurePath,
+ help='Directory or source path to process.')
+
+ args = parser.parse_args(argv)
+
+ if args.verbose > 1:
+ print("arguments:", args)
+
+ if args.spatch is None:
+ parser.error('spatch is part of the Coccinelle project and is '
+ 'available at http://coccinelle.lip6.fr/')
+
+ if coccinelle(args, get_config(args), os.environ) == 0:
+ parser.error(f'no coccinelle files found matching {args.cocci}')
+
+
+if __name__ == '__main__':
+ try:
+ main(sys.argv[1:])
+ except KeyboardInterrupt:
+ print("Execution aborted")
+ except Exception as exc:
+ print(exc)
--
2.43.0
0002-Create-coccicheck-target-for-autoconf.v3.patch (text/x-patch)
From 1c0e47883dca57ab9febf9af8b28bffa1e75c4f0 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Mon, 30 Dec 2024 19:58:07 +0100
Subject: Create coccicheck target for autoconf
This adds a coccicheck target for the autoconf-based build system. The
coccicheck target accepts one parameter MODE, which can be either "patch",
"report", or "context". The "patch" mode will generate a patch that can be
applied to the source tree, the "report" mode will generate a list of file
locations with information about what can be changed, and the "context" mode
will just highlight the line that will be affected by the semantic patch.
The following will generate a patch and apply it to the source code tree:
make coccicheck MODE=patch | patch -p1
---
configure | 100 ++++++++++++++++++++++++++++++++++++++---
configure.ac | 12 +++++
src/Makefile.global.in | 24 +++++++++-
src/makefiles/pgxs.mk | 3 ++
4 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/configure b/configure
index 93fddd69981..109a4868de8 100755
--- a/configure
+++ b/configure
@@ -772,6 +772,9 @@ enable_coverage
GENHTML
LCOV
GCOV
+enable_coccicheck
+SPFLAGS
+SPATCH
enable_debug
enable_rpath
default_port
@@ -839,6 +842,7 @@ with_pgport
enable_rpath
enable_debug
enable_profiling
+enable_coccicheck
enable_coverage
enable_dtrace
enable_tap_tests
@@ -1534,6 +1538,7 @@ Optional Features:
executables
--enable-debug build with debugging symbols (-g)
--enable-profiling build with profiling enabled
+ --enable-coccicheck enable Coccinelle checks (requires spatch)
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
--enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
@@ -3330,6 +3335,91 @@ fi
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+
+
+# Check whether --enable-coccicheck was given.
+if test "${enable_coccicheck+set}" = set; then :
+ enableval=$enable_coccicheck;
+ case $enableval in
+ yes)
+ if test -z "$SPATCH"; then
+ for ac_prog in spatch
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPATCH+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $SPATCH in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_SPATCH="$SPATCH" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_SPATCH="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+SPATCH=$ac_cv_path_SPATCH
+if test -n "$SPATCH"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$SPATCH" && break
+done
+
+else
+ # Report the value of SPATCH in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SPATCH" >&5
+$as_echo_n "checking for SPATCH... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+fi
+
+if test -z "$SPATCH"; then
+ as_fn_error $? "spatch not found" "$LINENO" 5
+fi
+
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-coccicheck option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_coccicheck=no
+
+fi
+
+
+
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
@@ -14998,7 +15088,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15044,7 +15134,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15068,7 +15158,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15113,7 +15203,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15137,7 +15227,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
diff --git a/configure.ac b/configure.ac
index b6d02f5ecc7..fdcda3a2d57 100644
--- a/configure.ac
+++ b/configure.ac
@@ -199,6 +199,18 @@ AC_SUBST(enable_debug)
PGAC_ARG_BOOL(enable, profiling, no,
[build with profiling enabled ])
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+PGAC_ARG_BOOL(enable, coccicheck, no,
+ [enable Coccinelle checks (requires spatch)],
+[PGAC_PATH_PROGS(SPATCH, spatch)
+if test -z "$SPATCH"; then
+ AC_MSG_ERROR([spatch not found])
+fi
+AC_SUBST(SPFLAGS)])
+AC_SUBST(enable_coccicheck)
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 3b620bac5ac..cf603e20b7e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -19,7 +19,7 @@
#
# Meta configuration
-standard_targets = all install installdirs uninstall clean distclean coverage check checkprep installcheck init-po update-po
+standard_targets = all install installdirs uninstall clean distclean coccicheck coverage check checkprep installcheck init-po update-po
# these targets should recurse even into subdirectories not being built:
standard_always_targets = clean distclean
@@ -201,6 +201,7 @@ enable_rpath = @enable_rpath@
enable_nls = @enable_nls@
enable_debug = @enable_debug@
enable_dtrace = @enable_dtrace@
+enable_coccicheck = @enable_coccicheck@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
@@ -374,7 +375,7 @@ CLDR_VERSION = 45
# If a particular subdirectory knows this isn't needed in itself or its
# children, it can set NO_GENERATED_HEADERS.
-all install check installcheck: submake-generated-headers
+all install check installcheck coccicheck: submake-generated-headers
.PHONY: submake-generated-headers
@@ -523,6 +524,11 @@ FOP = @FOP@
XMLLINT = @XMLLINT@
XSLTPROC = @XSLTPROC@
+# Coccinelle
+
+SPATCH = @SPATCH@
+SPFLAGS = @SPFLAGS@
+
# Code coverage
GCOV = @GCOV@
@@ -993,6 +999,20 @@ endif # nls.mk
endif # enable_nls
+##########################################################################
+#
+# Coccinelle checks
+#
+
+ifeq ($(enable_coccicheck), yes)
+coccicheck_py = $(top_srcdir)/src/tools/coccicheck.py
+coccicheck = SPATCH=$(SPATCH) SPFLAGS=$(SPFLAGS) $(PYTHON) $(coccicheck_py)
+
+.PHONY: coccicheck
+coccicheck:
+ $(coccicheck) --mode=$(MODE) 'cocci/**/*.cocci' $(top_srcdir)
+endif # enable_coccicheck
+
##########################################################################
#
# Coverage
diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk
index 0de3737e789..144459dccd2 100644
--- a/src/makefiles/pgxs.mk
+++ b/src/makefiles/pgxs.mk
@@ -95,6 +95,9 @@ endif
ifeq ($(FLEX),)
FLEX = flex
endif
+ifeq ($(SPATCH),)
+SPATCH = spatch
+endif
endif # PGXS
--
2.43.0
On Sun, Mar 2, 2025 at 2:26 PM Mats Kindahl <mats@timescale.com> wrote:
On Sat, Jan 18, 2025 at 8:44 PM Mats Kindahl <mats@timescale.com> wrote:
On Tue, Jan 14, 2025 at 4:19 PM Aleksander Alekseev <aleksander@timescale.com> wrote:

IMO the best solution would be re-submitting all the patches to this
thread. Also please make sure the patchset is registered on the
nearest open CF [1]. This will ensure that the patchset is built on our
CI (aka cfbot [2]) and will not be lost.

[1]: https://commitfest.postgresql.org/
[2]: http://cfbot.cputube.org/

Hi all,

Here is a new set of patches rebased on the latest version of
Postgres. I decided to just include the semantic patches in each patch
since it is trivial to generate the patch and build using:

ninja coccicheck-patch | patch -d .. -p1 && ninja

I repeat the description from the previous patch set and add comments
where things have changed, but I have also added two semantic patches,
which are described last.

For those of you that are not aware of it: Coccinelle is a tool for
pattern matching and text transformation for C code and can be used for
detection of problematic programming patterns and to make complex,
tree-wide patches easy. It is aware of the structure of C code and is
better suited to make complicated changes than what is possible using
normal text substitution tools like Sed and Perl. I've noticed it's been
used in a few cases way back to fix issues. [1]

Coccinelle has been used successfully in the Linux project since
2008 and is now an established tool for Linux development and a large
number of semantic patches have been added to the source tree to capture
everything from generic issues (like eliminating the redundant A in
expressions like "!A || (A && B)") to more Linux-specific problems like
adding a missing call to kfree().

Although PostgreSQL is nowhere near the size of the Linux kernel, it is
nevertheless of a significant size and would benefit from incorporating
Coccinelle into the development. I noticed it's been used in a few cases
way back (like 10 years back) to fix issues in the PostgreSQL code, but I
thought it might be useful to make it part of normal development practice
to, among other things:- Identify and correct bugs in the source code both during development
and review.
- Make large-scale changes to the source tree to improve the code based
on new insights.
- Encode and enforce APIs by ensuring that function calls are used
correctly.
- Use improved coding patterns for more efficient code.
- Allow extensions to automatically update code for later PostgreSQL
versions.

To that end, I created a series of patches to show how it could be used
in the PostgreSQL tree. It is a lot easier to discuss concrete code and I
split it up into separate messages since that makes it easier to discuss
each individual patch. The series contains code to make it easy to work
with Coccinelle during development and reviews, as well as examples of
semantic patches that capture problems, demonstrate how to make large-scale
changes, how to enforce APIs, and also improve some coding patterns.

The first three patches contain the coccicheck.py script and the
integration with the build system (both Meson and Autoconf).

# Coccicheck Script
It is a re-implementation of the coccicheck script that the Linux kernel
uses. We cannot immediately use the coccicheck script since it is quite
closely tied to the Linux source code tree and we need to have something
that both supports Autoconf and Meson. Since Python seems to be used more
and more in the tree, it seems to be the most natural choice. (I have no
strong opinion on what language to use, but think it would be good to have
something that is as platform-independent as possible.)

The intention is that we should be able to use the Linux semantic patches
directly, so it supports the "Requires" and "Options" keywords, which can
be used to require a specific version of spatch(1) and add options to the
execution of that semantic patch, respectively.

I have added support for using multiple jobs similar to how "make -jN"
works. This is also supported by the autoconf and ninja builds.

# Autoconf support

The changes to Autoconf modify the configure.ac and related files (in
particular Makefile.global.in). At this point, I have deliberately not
added support for pgxs so extensions cannot use coccicheck through the
PostgreSQL installation. This is something that we can add later, though.

The semantic patches are expected to live in the cocci/ directory under the
root and the patch uses the pattern cocci/**/*.cocci to find all semantic
patches. Right now there are no subdirectories for the semantic patches,
but this might be something we want to add to create different categories
of scripts.

The coccicheck target is used in the same way as for the Linux kernel,
that is, to generate and apply all patches suggested by the semantic
patches, you type:

make coccicheck MODE=patch | patch -p1

Linux has support for a few more variables: V to set the verbosity, J to
use multiple jobs for processing the semantic patches, M to select a
different directory to apply the semantic patches to, and COCCI to use a
single specific semantic patch rather than all available. I have not added
support for this right now, but if you think this is valuable, it should be
straightforward to add.

I used autoconf 2.69, as mentioned in configure.ac, but that generated a
bigger diff than I expected. Any advice here is welcome.

Using the parameter "JOBS" allows you to use multiple jobs, e.g.:
make coccicheck MODE=patch JOBS=4 | patch -p1
# Meson Support
The support for Meson is done by adding three coccicheck targets: one for
each mode. To apply all patches suggested by the semantic patches using
ninja (as is done in [2]), you type the following in the build directory
generated by Meson (e.g., the "build/" subdirectory):

ninja coccicheck-patch | patch -p1 -d ..

If you want to pass other flags, you have to set the SPFLAGS environment
variable when calling ninja:

SPFLAGS=--debug ninja coccicheck-report
If you want to use multiple jobs, you use something like this:
JOBS=4 ninja coccicheck-patch | patch -d .. -p1
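
The "Requires" and "Options" keywords that coccicheck.py recognizes are just
comment lines at the top of the semantic patch itself. A minimal sketch of
such a header (the version number here is made up; the options are the ones
used by some of the attached semantic patches):

// Requires: 1.0.7
// Options: --no-includes --include-headers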
# Semantic Patch: Wrong type for palloc()
This is the first example of a semantic patch and shows how to capture
and fix a common problem.

If you use palloc() to allocate memory for an object (or an array of
objects) and by mistake type something like:

StringInfoData *info = palloc(sizeof(StringInfoData*));
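
To make the mistake concrete, a hedged annotation of the two sizes involved
(the pointer size is platform-dependent and only illustrative):

/* Illustration only: what the two sizeof() expressions measure. */
Size		wrong = sizeof(StringInfoData *);	/* size of a pointer, e.g. 8 bytes */
Size		right = sizeof(StringInfoData);		/* size of the whole struct */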
You will not allocate enough memory for storing the object. This semantic
patch catches any cases, for either an array of objects or a single object,
where the types do not agree in this sense; more precisely, it captures
assignments to a variable of type T* where palloc() uses a mismatching
sizeof(), either alone or multiplied by a single expression (assuming this
is an array count).

The semantic patch is overzealous in the sense that using the wrong
typedef will suggest a change (this can be seen in the patch). Although the
sizes of these are the same, it is probably better to just follow the
convention of always using the type "T*" for any "palloc(sizeof(T))", since
the typedef can change at any point and would then introduce a bug.
Coccicheck can easily fix this for you, so it is straightforward to enforce
this. It also simplifies other automated checking to follow this convention.

We don't really have any real bugs as a result of this, but we have one
case where an allocation of "sizeof(LLVMBasicBlockRef*)" is assigned to an
"LLVMBasicBlockRef*", which strictly speaking is not correct (it should be
"sizeof(LLVMBasicBlockRef)"). However, since they are both pointers, there
is no risk of an incorrect allocation size. That is the one typedef usage
that does not match.

# Semantic Patch: Introduce palloc_array() and palloc_object() where possible

This is an example of a large-scale refactoring to improve the code.
For PostgreSQL 16, Peter extended the palloc()/pg_malloc() interface in
commit 2016055a92f to provide more type-safety, but these functions are not
widely used. This semantic patch captures and replaces all uses of palloc()
where palloc_array() or palloc_object() could be used instead. It
deliberately does not touch cases where it is not clear that the
replacement can be done.

# Semantic Patch: replace code with pg_cmp_*
This is an example of a large-scale refactoring to improve the code.
In commits 3b42bdb4716 and 6b80394781c, overflow-safe comparison functions
were introduced, but they are not widely used. This semantic patch
identifies some of the more common cases and replaces them with calls to
the corresponding pg_cmp_* function.

The patches save a few instructions according to a check with Godbolt.
It's not much, but since it is so easy to apply them, it might still be
worthwhile.

# Semantic Patch: Replace dynamic allocation of StringInfo with StringInfoData

This uses improved coding patterns for more efficient code.
This semantic patch replaces uses of StringInfo with StringInfoData where
the info is dynamically allocated but (optionally) freed at the end of the
block. This will avoid one dynamic allocation that otherwise has to be
dealt with.

For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);

Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);

It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the
pointer can "leak" outside the function either through a global variable or
a parameter assignment.
- If an assignment is done to the expression. This cannot leak the
data, but could mean a value-assignment of a structure, so we avoid this
case.
- If the pointer is returned.

The cases that this semantic patch fixed when I uploaded the first version
of the other patches seem to have been dealt with, but having it as part
of the code base prevents such cases from surfacing again.

[1]: https://coccinelle.gitlabpages.inria.fr/website/
[2]: https://www.postgresql.org/docs/current/install-meson.html

--
Best wishes,
Mats Kindahl, Timescale
Hi all,
There was a problem with the meson.build file causing errors in the build
farm (because spatch is not installed), so here is a new set of patches.
Only the patch touching meson.build has changed, but I am unsure how patches
are picked up, so I am adding a new version of all the files here.
--
Best wishes,
Mats Kindahl, Timescale
Attachments:
0007-Semantic-patch-to-use-stack-allocated-StringInfoData.v4.patch (text/x-patch)
From 3362026ed695b6a564eaba0c8f5fbd2bda1f53da Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: Semantic patch to use stack-allocated StringInfoData
This semantic patch replaces uses of StringInfo with StringInfoData where the
info is dynamically allocated but (optionally) freed at the end of the block.
This will avoid one dynamic allocation that otherwise has to be dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the pointer can
"leak" outside the function either through a global variable or a parameter
assignment.
- If an assignment is done to the expression. This cannot leak the data, but
could mean a value-assignment of a structure, so we avoid this case.
- If the pointer is returned.
---
cocci/use_stringinfodata.cocci | 155 +++++++++++++++++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..4186027f8c9
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,155 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This will avoid one dynamic allocation that otherwise has
+// to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one is just a convenience.
+//
+// If we replace this, the resulting change will result in a value
+// copy of a structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* \(pfree\|destroyStringInfo\)(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" since we cannot
+// inherit over modifications
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+ ... when != S
+(
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
--
2.43.0
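
As a hedged illustration of one case the semantic patch above deliberately
leaves alone, consider a function that returns the pointer; the function is
invented for illustration:

/* Not rewritten: the StringInfo escapes the function, so a
 * stack-allocated StringInfoData would not survive the return. */
static StringInfo
build_greeting(const char *name)
{
	StringInfo	buf = makeStringInfo();

	appendStringInfo(buf, "Hello, %s", name);
	return buf;
}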
0006-Semantic-patch-for-pg_cmp_-functions.v4.patch (text/x-patch)
From fa1f167246fe97aa26811898d7e19f456bd69896 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Thu, 23 Jan 2025 02:46:14 +0100
Subject: Semantic patch for pg_cmp_* functions
In commits 3b42bdb4716 and 6b80394781c, overflow-safe comparison functions were
introduced, but they are not widely used. This semantic patch identifies some
of the more common cases and replaces them with calls to the corresponding
pg_cmp_* function.
---
cocci/use_pg_cmp.cocci | 125 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 cocci/use_pg_cmp.cocci
diff --git a/cocci/use_pg_cmp.cocci b/cocci/use_pg_cmp.cocci
new file mode 100644
index 00000000000..8a258e61e5d
--- /dev/null
+++ b/cocci/use_pg_cmp.cocci
@@ -0,0 +1,125 @@
+// Find cases where we can use the new pg_cmp_* functions.
+//
+// Copyright 2025 Mats Kindahl, Timescale.
+//
+// Options: --no-includes --include-headers
+
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+
+import re
+
+TYPMAP = {
+ 'BlockNumber': 'pg_cmp_u32',
+ 'ForkNumber': 'pg_cmp_s32',
+ 'OffsetNumber': 'pg_cmp_s16',
+ 'int': 'pg_cmp_s32',
+ 'int16': 'pg_cmp_s16',
+ 'int32': 'pg_cmp_s32',
+ 'uint16': 'pg_cmp_u16',
+ 'uint32': 'pg_cmp_u32',
+ 'unsigned int': 'pg_cmp_u32',
+}
+
+def is_valid(expr):
+ return not re.search(r'DatumGet[A-Za-z]+', expr)
+
+@r1e depends on context || report expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+* lhs@pos < rhs ? -1 : lhs > rhs ? 1 : 0
+
+@script:python depends on report@
+lhs << r1e.lhs;
+rhs << r1e.rhs;
+pos << r1e.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@r1 depends on context || report@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
+|
+* if@pos (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
+|
+* if@pos (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
+)
+
+@script:python depends on report@
+lhs << r1.lhs;
+rhs << r1.rhs;
+pos << r1.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@expr_repl depends on patch expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+- lhs < rhs ? -1 : lhs > rhs ? 1 : 0
++ cmp(lhs,rhs)
+
+@stmt_repl depends on patch@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+- if (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; else if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
++ return cmp(lhs,rhs);
+)
+
+// Add an include if there were none and we had to do some
+// replacements
+@has_include depends on patch@
+@@
+ #include "common/int.h"
+
+@depends on patch && !has_include && (stmt_repl || expr_repl)@
+@@
+ #include ...
++ #include "common/int.h"
--
2.43.0
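
A hedged before/after sketch of what the semantic patch above does; the
comparator itself is invented for illustration, and BlockNumber maps to
pg_cmp_u32 in the rule's type table:

/* Before: open-coded three-way comparison. */
static int
blockno_cmp(const void *a, const void *b)
{
	BlockNumber lhs = *(const BlockNumber *) a;
	BlockNumber rhs = *(const BlockNumber *) b;

	if (lhs < rhs)
		return -1;
	if (lhs > rhs)
		return 1;
	return 0;
}

/* After: the body is replaced with the overflow-safe helper from
 * "common/int.h", which the semantic patch also adds as an include
 * if it is missing. */
static int
blockno_cmp(const void *a, const void *b)
{
	BlockNumber lhs = *(const BlockNumber *) a;
	BlockNumber rhs = *(const BlockNumber *) b;

	return pg_cmp_u32(lhs, rhs);
}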
0001-Add-initial-coccicheck-script.v4.patch (text/x-patch)
From 882f3b150c61884b9e176a1f506747c35235ab0a Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 19:35:58 +0100
Subject: Add initial coccicheck script
The coccicheck.py script can be used to run several semantic patches on a
source tree to either generate a report, see the context of the modification
(which lines require changes), or generate a patch to correct an issue.
usage: coccicheck.py [-h] [--verbose] [--spatch SPATCH]
[--spflags SPFLAGS]
[--mode {patch,report,context}] [--jobs JOBS]
[--include DIR] [--patchdir DIR]
pattern path [path ...]
positional arguments:
pattern Pattern for Cocci files to use.
path Directory or source path to process.
options:
-h, --help show this help message and exit
--verbose, -v
--spatch SPATCH Path to spatch binary. Defaults to value of
environment variable SPATCH.
--spflags SPFLAGS Flags to pass to spatch call. Defaults to
value of environment variable SPFLAGS.
--mode {patch,report,context}
Mode to use for coccinelle. Defaults to
value of environment variable MODE.
--jobs JOBS Number of jobs to use for spatch. Defaults
to value of environment variable JOBS.
--include DIR, -I DIR
Extra include directories.
--patchdir DIR Path for which patch should be created
relative to.
---
src/tools/coccicheck.py | 185 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 185 insertions(+)
create mode 100755 src/tools/coccicheck.py
diff --git a/src/tools/coccicheck.py b/src/tools/coccicheck.py
new file mode 100755
index 00000000000..838f8184c54
--- /dev/null
+++ b/src/tools/coccicheck.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python3
+
+"""Run Coccinelle on a set of files and directories.
+
+This is a re-written version of the Linux ``coccicheck`` script.
+
+Coccicheck can run in three different modes (the original has four
+different modes):
+
+- *patch*: patch files using the cocci file.
+
+- *report*: report any improvements that the semantic patches can
+  make, but do not show any patch.
+
+- *context*: show the context where the patch can be applied.
+
+The program will take a single cocci file and call spatch(1) with a
+set of paths that can be either files or directories.
+
+When starting, the cocci file will be parsed and any lines containing
+"Options:" or "Requires:" will be treated specially.
+
+- Lines containing "Options:" will have a list of options to add to
+ the call of the spatch(1) program. These options will be added last.
+
+- Lines containing "Requires:" can contain a version of spatch(1) that
+ is required for this cocci file. If the version requirements are not
+ satisfied, the file will not be used.
+
+When calling spatch(1), it will set the virtual rules "patch",
+"report", or "context" and the cocci file can use these to act
+differently depending on the mode.
+
+The following environment variables can be set:
+
+SPATCH: Path to spatch program. This will be used if no path is
+ passed using the option --spatch.
+
+SPFLAGS: Extra flags to use when calling spatch. These will be added
+ last.
+
+MODE: Mode to use. It will be used if no --mode is passed to
+ coccicheck.py.
+
+"""
+
+import argparse
+import os
+import sys
+import subprocess
+import re
+
+from pathlib import PurePath, Path
+from packaging import version
+
+VERSION_CRE = re.compile(
+ r'spatch version (\S+) compiled with OCaml version (\S+)'
+)
+
+
+def parse_metadata(cocci_file):
+ """Parse metadata in Cocci file."""
+ metadata = {}
+ with open(cocci_file) as fh:
+ for line in fh:
+            mre = re.search(r'(Options|Requires):\s*(.*)', line, re.IGNORECASE)
+ if mre:
+ metadata[mre.group(1).lower()] = mre.group(2)
+ return metadata
+
+
+def get_config(args):
+ """Compute configuration information."""
+ # Figure out spatch version. We just need to read the first line
+ config = {}
+ cmd = [args.spatch, '--version']
+ with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
+ for line in proc.stdout:
+ mre = VERSION_CRE.match(line)
+ if mre:
+ config['spatch_version'] = mre.group(1)
+ break
+ return config
+
+
+def run_spatch(cocci_file, args, config, env):
+ """Run coccinelle on the provided file."""
+ if args.verbose > 1:
+ print("processing cocci file", cocci_file)
+ spatch_version = config['spatch_version']
+ metadata = parse_metadata(cocci_file)
+
+ # Check that we have a valid version
+    if 'requires' in metadata:
+        required_version = version.parse(metadata['requires'])
+        if version.parse(spatch_version) < required_version:
+ print(
+ f'Skipping SmPL patch {cocci_file}: '
+ f'requires {required_version} (had {spatch_version})'
+ )
+ return
+
+ command = [
+ args.spatch,
+ "-D", args.mode,
+ "--cocci-file", cocci_file,
+ "--very-quiet",
+ ]
+
+ if 'options' in metadata:
+ command.append(metadata['options'])
+ if args.mode == 'report':
+ command.append('--no-show-diff')
+ if args.patchdir:
+ command.extend(['--patch', args.patchdir])
+ if args.jobs:
+ command.extend(['--jobs', args.jobs])
+ if args.spflags:
+ command.append(args.spflags)
+
+ for path in args.path:
+ subprocess.run(command + [path], env=env, check=True)
+
+
+def coccinelle(args, config, env):
+ """Run coccinelle on all files matching the provided pattern."""
+ root = '/' if PurePath(args.cocci).is_absolute() else '.'
+ count = 0
+ for cocci_file in Path(root).glob(args.cocci):
+ count += 1
+ run_spatch(cocci_file, args, config, env)
+ return count
+
+
+def main(argv):
+ """Run coccicheck."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--verbose', '-v', action='count', default=0)
+ parser.add_argument('--spatch', type=PurePath, metavar='SPATCH',
+ default=os.environ.get('SPATCH'),
+ help=('Path to spatch binary. Defaults to '
+ 'value of environment variable SPATCH.'))
+ parser.add_argument('--spflags', type=PurePath,
+ metavar='SPFLAGS',
+ default=os.environ.get('SPFLAGS', None),
+ help=('Flags to pass to spatch call. Defaults '
+                              'to value of environment variable SPFLAGS.'))
+ parser.add_argument('--mode', choices=['patch', 'report', 'context'],
+ default=os.environ.get('MODE', 'report'),
+ help=('Mode to use for coccinelle. Defaults to '
+ 'value of environment variable MODE.'))
+ parser.add_argument('--jobs', default=os.environ.get('JOBS', None),
+ help=('Number of jobs to use for spatch. Defaults to '
+ 'value of environment variable JOBS.'))
+ parser.add_argument('--include', '-I', type=PurePath,
+ metavar='DIR',
+ help='Extra include directories.')
+ parser.add_argument('--patchdir', type=PurePath, metavar='DIR',
+ help=('Path for which patch should be created '
+ 'relative to.'))
+ parser.add_argument('cocci', metavar='pattern',
+ help='Pattern for Cocci files to use.')
+ parser.add_argument('path', nargs='+', type=PurePath,
+ help='Directory or source path to process.')
+
+ args = parser.parse_args(argv)
+
+ if args.verbose > 1:
+ print("arguments:", args)
+
+ if args.spatch is None:
+ parser.error('spatch is part of the Coccinelle project and is '
+ 'available at http://coccinelle.lip6.fr/')
+
+ if coccinelle(args, get_config(args), os.environ) == 0:
+ parser.error(f'no coccinelle files found matching {args.cocci}')
+
+
+if __name__ == '__main__':
+ try:
+ main(sys.argv[1:])
+ except KeyboardInterrupt:
+ print("Execution aborted")
+ except Exception as exc:
+ print(exc)
--
2.43.0
0003-Add-meson-build-for-coccicheck.v4.patch (text/x-patch)
From 7a0e89c7909e7a205f4724a11d73d00f38538e37 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Wed, 1 Jan 2025 14:15:51 +0100
Subject: Add meson build for coccicheck
This commit adds a run target `coccicheck` to meson build files.
Since ninja does not accept parameters the same way make does, there are three
run targets defined---"coccicheck-patch", "coccicheck-report", and
"coccicheck-context"---that you can use to generate a patch, get a report, and
get the context respectively. For example, to patch the tree from the "build"
subdirectory created by the meson run:
ninja coccicheck-patch | patch -d .. -p1
---
meson.build | 30 ++++++++++++++++++++++++++++++
meson_options.txt | 7 ++++++-
src/makefiles/meson.build | 6 ++++++
3 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/meson.build b/meson.build
index 13c13748e5d..0e0828f97f0 100644
--- a/meson.build
+++ b/meson.build
@@ -348,6 +348,7 @@ missing = find_program('config/missing', native: true)
cp = find_program('cp', required: false, native: true)
xmllint_bin = find_program(get_option('XMLLINT'), native: true, required: false)
xsltproc_bin = find_program(get_option('XSLTPROC'), native: true, required: false)
+spatch = find_program(get_option('SPATCH'), native: true, required: false)
bison_flags = []
if bison.found()
@@ -1642,6 +1643,34 @@ else
endif
+###############################################################
+# Option: Coccinelle checks
+###############################################################
+
+coccicheck_opt = get_option('coccicheck')
+coccicheck_dep = not_found_dep
+if not coccicheck_opt.disabled()
+ if spatch.found()
+ coccicheck_dep = declare_dependency()
+ elif coccicheck_opt.enabled()
+ error('missing required tools (spatch needed) for Coccinelle checks')
+ endif
+endif
+
+if coccicheck_opt.enabled()
+ coccicheck_modes = ['context', 'report', 'patch']
+ foreach mode : coccicheck_modes
+ run_target('coccicheck-' + mode,
+ command: [python, files('src/tools/coccicheck.py'),
+ '--mode', mode,
+ '--spatch', spatch,
+ '--patchdir', '@SOURCE_ROOT@',
+ '@SOURCE_ROOT@/cocci/**/*.cocci',
+ '@SOURCE_ROOT@/src',
+ '@SOURCE_ROOT@/contrib',
+ ])
+ endforeach
+endif
###############################################################
# Compiler tests
@@ -3808,6 +3837,7 @@ if meson.version().version_compare('>=0.57')
{
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
+ 'spatch': spatch,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
},
section: 'Programs',
diff --git a/meson_options.txt b/meson_options.txt
index 702c4517145..37d6d43af93 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -43,6 +43,9 @@ option('cassert', type: 'boolean', value: false,
option('tap_tests', type: 'feature', value: 'auto',
description: 'Enable TAP tests')
+option('coccicheck', type: 'feature', value: 'auto',
+ description: 'Enable Coccinelle checks')
+
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -52,7 +55,6 @@ option('PG_TEST_EXTRA', type: 'string', value: '',
option('PG_GIT_REVISION', type: 'string', value: 'HEAD',
description: 'git revision to be packaged by pgdist target')
-
# Compilation options
option('extra_include_dirs', type: 'array', value: [],
@@ -195,6 +197,9 @@ option('PYTHON', type: 'array', value: ['python3', 'python'],
option('SED', type: 'string', value: 'gsed',
description: 'Path to sed binary')
+option('SPATCH', type: 'string', value: 'spatch',
+ description: 'Path to spatch binary, used for SmPL patches')
+
option('STRIP', type: 'string', value: 'strip',
description: 'Path to strip binary, used for PGXS emulation')
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 60e13d50235..c66156d9046 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -57,6 +57,7 @@ pgxs_kv = {
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
+ 'enable_coccicheck': spatch.found() ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -151,6 +152,7 @@ pgxs_bins = {
'TAR': tar,
'ZSTD': program_zstd,
'DTRACE': dtrace,
+ 'SPATCH': spatch,
}
pgxs_empty = [
@@ -166,6 +168,10 @@ pgxs_empty = [
'DBTOEPUB',
'FOP',
+ # Coccinelle is not supported by pgxs
+ 'SPATCH',
+ 'SPFLAGS',
+
# supporting coverage for pgxs-in-meson build doesn't seem worth it
'GENHTML',
'LCOV',
--
2.43.0
0004-Semantic-patch-for-sizeof-using-palloc.v4.patch (text/x-patch)
From 3d7bca9422d5e9c851e42f442afeeb2dfc2104c3 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 5 Jan 2025 19:26:47 +0100
Subject: Semantic patch for sizeof() using palloc()
If palloc() is used to allocate elements of type T, it should be assigned to a
variable of type T*, or you risk out-of-bounds accesses. This semantic patch checks
that allocations to variables of type T* are using sizeof(T) when allocating
memory using palloc().
---
cocci/palloc_sizeof.cocci | 49 +++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
create mode 100644 cocci/palloc_sizeof.cocci
diff --git a/cocci/palloc_sizeof.cocci b/cocci/palloc_sizeof.cocci
new file mode 100644
index 00000000000..5f8593c2687
--- /dev/null
+++ b/cocci/palloc_sizeof.cocci
@@ -0,0 +1,49 @@
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+import re
+
+CONST_CRE = re.compile(r'\bconst\b')
+
+def is_simple_type(s):
+ return s != 'void' and not CONST_CRE.search(s)
+
+@r1 depends on report || context@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+position p;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+* I = func@p(sizeof(T2))
+|
+* I = func@p(E * sizeof(T2))
+)
+
+@script:python depends on report@
+T1 << r1.T1;
+T2 << r1.T2;
+I << r1.I;
+p << r1.p;
+@@
+coccilib.report.print_report(p[0], f"'{I}' has type '{T1}*' but 'sizeof({T2})' is used to allocate memory")
+
+@depends on patch@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+- I = func(sizeof(T2))
++ I = func(sizeof(T1))
+|
+- I = func(E * sizeof(T2))
++ I = func(E * sizeof(T1))
+)
--
2.43.0
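
A hedged sketch of the array form that the semantic patch above also covers;
the variable and count are invented for illustration:

StringInfoData *lines;
int			nlines = 16;

/* Flagged: the sizeof() measures a pointer, not the element type. */
lines = palloc(nlines * sizeof(StringInfoData *));

/* Suggested rewrite: size the elements the variable points to. */
lines = palloc(nlines * sizeof(StringInfoData));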
0002-Create-coccicheck-target-for-autoconf.v4.patch (text/x-patch)
From 1c0e47883dca57ab9febf9af8b28bffa1e75c4f0 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Mon, 30 Dec 2024 19:58:07 +0100
Subject: Create coccicheck target for autoconf
This adds a coccicheck target for the autoconf-based build system. The
coccicheck target accepts one parameter MODE, which can be either "patch",
"report", or "context". The "patch" mode will generate a patch that can be
applied to the source tree, the "report" mode will generate a list of file
locations with information about what can be changed, and the "context" mode
will just highlight the line that will be affected by the semantic patch.
The following will generate a patch and apply it to the source code tree:
make coccicheck MODE=patch | patch -p1
---
configure | 100 ++++++++++++++++++++++++++++++++++++++---
configure.ac | 12 +++++
src/Makefile.global.in | 24 +++++++++-
src/makefiles/pgxs.mk | 3 ++
4 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/configure b/configure
index 93fddd69981..109a4868de8 100755
--- a/configure
+++ b/configure
@@ -772,6 +772,9 @@ enable_coverage
GENHTML
LCOV
GCOV
+enable_coccicheck
+SPFLAGS
+SPATCH
enable_debug
enable_rpath
default_port
@@ -839,6 +842,7 @@ with_pgport
enable_rpath
enable_debug
enable_profiling
+enable_coccicheck
enable_coverage
enable_dtrace
enable_tap_tests
@@ -1534,6 +1538,7 @@ Optional Features:
executables
--enable-debug build with debugging symbols (-g)
--enable-profiling build with profiling enabled
+ --enable-coccicheck enable Coccinelle checks (requires spatch)
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
--enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
@@ -3330,6 +3335,91 @@ fi
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+
+
+# Check whether --enable-coccicheck was given.
+if test "${enable_coccicheck+set}" = set; then :
+ enableval=$enable_coccicheck;
+ case $enableval in
+ yes)
+ if test -z "$SPATCH"; then
+ for ac_prog in spatch
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPATCH+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $SPATCH in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_SPATCH="$SPATCH" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_SPATCH="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+SPATCH=$ac_cv_path_SPATCH
+if test -n "$SPATCH"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$SPATCH" && break
+done
+
+else
+ # Report the value of SPATCH in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SPATCH" >&5
+$as_echo_n "checking for SPATCH... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+fi
+
+if test -z "$SPATCH"; then
+ as_fn_error $? "spatch not found" "$LINENO" 5
+fi
+
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-coccicheck option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_coccicheck=no
+
+fi
+
+
+
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
@@ -14998,7 +15088,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15044,7 +15134,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15068,7 +15158,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15113,7 +15203,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15137,7 +15227,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
diff --git a/configure.ac b/configure.ac
index b6d02f5ecc7..fdcda3a2d57 100644
--- a/configure.ac
+++ b/configure.ac
@@ -199,6 +199,18 @@ AC_SUBST(enable_debug)
PGAC_ARG_BOOL(enable, profiling, no,
[build with profiling enabled ])
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+PGAC_ARG_BOOL(enable, coccicheck, no,
+ [enable Coccinelle checks (requires spatch)],
+[PGAC_PATH_PROGS(SPATCH, spatch)
+if test -z "$SPATCH"; then
+ AC_MSG_ERROR([spatch not found])
+fi
+AC_SUBST(SPFLAGS)])
+AC_SUBST(enable_coccicheck)
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 3b620bac5ac..cf603e20b7e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -19,7 +19,7 @@
#
# Meta configuration
-standard_targets = all install installdirs uninstall clean distclean coverage check checkprep installcheck init-po update-po
+standard_targets = all install installdirs uninstall clean distclean coccicheck coverage check checkprep installcheck init-po update-po
# these targets should recurse even into subdirectories not being built:
standard_always_targets = clean distclean
@@ -201,6 +201,7 @@ enable_rpath = @enable_rpath@
enable_nls = @enable_nls@
enable_debug = @enable_debug@
enable_dtrace = @enable_dtrace@
+enable_coccicheck = @enable_coccicheck@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
@@ -374,7 +375,7 @@ CLDR_VERSION = 45
# If a particular subdirectory knows this isn't needed in itself or its
# children, it can set NO_GENERATED_HEADERS.
-all install check installcheck: submake-generated-headers
+all install check installcheck coccicheck: submake-generated-headers
.PHONY: submake-generated-headers
@@ -523,6 +524,11 @@ FOP = @FOP@
XMLLINT = @XMLLINT@
XSLTPROC = @XSLTPROC@
+# Coccinelle
+
+SPATCH = @SPATCH@
+SPFLAGS = @SPFLAGS@
+
# Code coverage
GCOV = @GCOV@
@@ -993,6 +999,20 @@ endif # nls.mk
endif # enable_nls
+##########################################################################
+#
+# Coccinelle checks
+#
+
+ifeq ($(enable_coccicheck), yes)
+coccicheck_py = $(top_srcdir)/src/tools/coccicheck.py
+coccicheck = SPATCH=$(SPATCH) SPFLAGS=$(SPFLAGS) $(PYTHON) $(coccicheck_py)
+
+.PHONY: coccicheck
+coccicheck:
+ $(coccicheck) --mode=$(MODE) 'cocci/**/*.cocci' $(top_srcdir)
+endif # enable_coccicheck
+
##########################################################################
#
# Coverage
diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk
index 0de3737e789..144459dccd2 100644
--- a/src/makefiles/pgxs.mk
+++ b/src/makefiles/pgxs.mk
@@ -95,6 +95,9 @@ endif
ifeq ($(FLEX),)
FLEX = flex
endif
+ifeq ($(SPATCH),)
+SPATCH = spatch
+endif
endif # PGXS
--
2.43.0
0005-Semantic-patch-for-palloc_array-and-palloc_object.v4.patch (text/x-patch)
From f5f4d31746ed21233eeff46f21e48938c4d20fef Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 20:23:25 +0100
Subject: Semantic patch for palloc_array and palloc_object
Macros were added to the palloc API in commit 2016055a92f to improve
type-safety, but very few instances were replaced. This adds a cocci script to
do that replacement. The semantic patch deliberately does not replace instances
where the type of the variable and the type used in the macro do not match.
---
cocci/palloc_array.cocci | 157 +++++++++++++++++++++++++++++++++++++++
1 file changed, 157 insertions(+)
create mode 100644 cocci/palloc_array.cocci
diff --git a/cocci/palloc_array.cocci b/cocci/palloc_array.cocci
new file mode 100644
index 00000000000..aeeab74c3a9
--- /dev/null
+++ b/cocci/palloc_array.cocci
@@ -0,0 +1,157 @@
+// Since PG16 there are array versions of common palloc operations, so
+// we can use those instead.
+//
+// We ignore cases where we have a anonymous struct and also when the
+// type of the variable being assigned to is different from the
+// inferred type.
+//
+// Options: --no-includes --include-headers
+
+virtual patch
+virtual report
+virtual context
+
+// These rules (soN) are needed to rewrite types of the form
+// sizeof(T[C]) to C * sizeof(T) since Cocci cannot (currently) handle
+// it.
+@initialize:python@
+@@
+import re
+
+CRE = re.compile(r'(.*)\s+\[\s+(\d+)\s+\]$')
+
+def is_array_type(s):
+ mre = CRE.match(s)
+ return (mre is not None)
+
+@so1 depends on patch@
+type T : script:python() { is_array_type(T) };
+@@
+palloc(sizeof(T))
+
+@script:python so2 depends on patch@
+T << so1.T;
+T2;
+E;
+@@
+mre = CRE.match(T)
+coccinelle.T2 = cocci.make_type(mre.group(1))
+coccinelle.E = cocci.make_expr(mre.group(2))
+
+@depends on patch@
+type so1.T;
+type so2.T2;
+expression so2.E;
+@@
+- palloc(sizeof(T))
++ palloc(E * sizeof(T2))
+
+@r1 depends on report || context@
+type T !~ "^struct {";
+expression E;
+position p;
+idexpression T *I;
+identifier alloc = {palloc0, palloc};
+@@
+* I = alloc@p(E * sizeof(T))
+
+@script:python depends on report@
+p << r1.p;
+alloc << r1.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+T *P;
+idexpression T* I;
+constant C;
+identifier alloc = {palloc0, palloc};
+fresh identifier alloc_array = alloc ## "_array";
+@@
+(
+- I = (T*) alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = (T*) alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+)
+
+@r3 depends on report || context@
+type T !~ "^struct {";
+expression E;
+idexpression T *P;
+idexpression T *I;
+position p;
+@@
+* I = repalloc@p(P, E * sizeof(T))
+
+@script:python depends on report@
+p << r3.p;
+@@
+coccilib.report.print_report(p[0], "this repalloc can be replaced with repalloc_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+idexpression T *P1;
+idexpression T *P2;
+idexpression T *I;
+constant C;
+@@
+(
+- I = (T*) repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = (T*) repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+)
+
+@r4 depends on report || context@
+type T !~ "^struct {";
+position p;
+idexpression T* I;
+identifier alloc = {palloc, palloc0};
+@@
+* I = alloc@p(sizeof(T))
+
+@script:python depends on report@
+p << r4.p;
+alloc << r4.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_object")
+
+@depends on patch@
+type T !~ "^struct {";
+T* P;
+idexpression T *I;
+constant C;
+identifier alloc = {palloc, palloc0};
+fresh identifier alloc_object = alloc ## "_object";
+@@
+(
+- I = (T*) alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = (T*) alloc(sizeof(T))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof(T))
++ I = alloc_object(T)
+)
--
2.43.0
Hi all,
Here is an updated set of patches based on the latest HEAD of PostgreSQL.
Best wishes,
Mats Kindahl
Attachments:
0001-Add-initial-coccicheck-script.v5.patch
From 5fa03693cca229dc2228348e642a243eff6c670a Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 19:35:58 +0100
Subject: [PATCH 1/7] Add initial coccicheck script
The coccicheck.py script can be used to run several semantic patches on a
source tree to either generate a report, see the context of the modification
(which lines require changes), or generate a patch to correct an issue.
usage: coccicheck.py [-h] [--verbose] [--spatch SPATCH]
[--spflags SPFLAGS]
[--mode {patch,report,context}] [--jobs JOBS]
[--include DIR] [--patchdir DIR]
pattern path [path ...]
positional arguments:
pattern Pattern for Cocci files to use.
path Directory or source path to process.
options:
-h, --help show this help message and exit
--verbose, -v
--spatch SPATCH Path to spatch binary. Defaults to value of
environment variable SPATCH.
--spflags SPFLAGS Flags to pass to spatch call. Defaults to
value of environment variable SPFLAGS.
--mode {patch,report,context}
Mode to use for coccinelle. Defaults to
value of environment variable MODE.
--jobs JOBS Number of jobs to use for spatch. Defaults
to value of environment variable JOBS.
--include DIR, -I DIR
Extra include directories.
--patchdir DIR Path for which patch should be created
relative to.
---
src/tools/coccicheck.py | 185 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 185 insertions(+)
create mode 100755 src/tools/coccicheck.py
diff --git a/src/tools/coccicheck.py b/src/tools/coccicheck.py
new file mode 100755
index 00000000000..838f8184c54
--- /dev/null
+++ b/src/tools/coccicheck.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python3
+
+"""Run Coccinelle on a set of files and directories.
+
+This is a re-written version of the Linux ``coccicheck`` script.
+
+Coccicheck can run in three different modes (the original has four
+different modes):
+
+- *patch*: patch files using the cocci file.
+
+- *report*: report the improvements that the semantic patches can
+ make, but do not show any patch.
+
+- *context*: show the context where the patch can be applied.
+
+The program will take a single cocci file and call spatch(1) with a
+set of paths that can be either files or directories.
+
+When starting, the cocci file will be parsed and any lines containing
+"Options:" or "Requires:" will be treated specially.
+
+- Lines containing "Options:" will have a list of options to add to
+ the call of the spatch(1) program. These options will be added last.
+
+- Lines containing "Requires:" can contain a version of spatch(1) that
+ is required for this cocci file. If the version requirements are not
+ satisfied, the file will not be used.
+
+When calling spatch(1), it will set the virtual rules "patch",
+"report", or "context" and the cocci file can use these to act
+differently depending on the mode.
+
+The following environment variables can be set:
+
+SPATCH: Path to spatch program. This will be used if no path is
+ passed using the option --spatch.
+
+SPFLAGS: Extra flags to use when calling spatch. These will be added
+ last.
+
+MODE: Mode to use. It will be used if no --mode is passed to
+ coccicheck.py.
+
+"""
+
+import argparse
+import os
+import sys
+import subprocess
+import re
+
+from pathlib import PurePath, Path
+from packaging import version
+
+VERSION_CRE = re.compile(
+ r'spatch version (\S+) compiled with OCaml version (\S+)'
+)
+
+
+def parse_metadata(cocci_file):
+ """Parse metadata in Cocci file."""
+ metadata = {}
+ with open(cocci_file) as fh:
+ for line in fh:
+ mre = re.search(r'(Options|Requires):(.*)', line, re.IGNORECASE)
+ if mre:
+ metadata[mre.group(1).lower()] = mre.group(2)
+ return metadata
+
+
+def get_config(args):
+ """Compute configuration information."""
+ # Figure out spatch version. We just need to read the first line
+ config = {}
+ cmd = [args.spatch, '--version']
+ with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
+ for line in proc.stdout:
+ mre = VERSION_CRE.match(line)
+ if mre:
+ config['spatch_version'] = mre.group(1)
+ break
+ return config
+
+
+def run_spatch(cocci_file, args, config, env):
+ """Run coccinelle on the provided file."""
+ if args.verbose > 1:
+ print("processing cocci file", cocci_file)
+ spatch_version = config['spatch_version']
+ metadata = parse_metadata(cocci_file)
+
+ # Skip this file if it requires a newer spatch than the one we have
+ if 'requires' in metadata:
+ required_version = version.parse(metadata['requires'])
+ if version.parse(spatch_version) < required_version:
+ print(
+ f'Skipping SmPL patch {cocci_file}: '
+ f'requires {required_version} (had {spatch_version})'
+ )
+ return
+
+ command = [
+ args.spatch,
+ "-D", args.mode,
+ "--cocci-file", cocci_file,
+ "--very-quiet",
+ ]
+
+ if 'options' in metadata:
+ command.extend(metadata['options'].split())
+ if args.mode == 'report':
+ command.append('--no-show-diff')
+ if args.patchdir:
+ command.extend(['--patch', args.patchdir])
+ if args.jobs:
+ command.extend(['--jobs', args.jobs])
+ if args.spflags:
+ command.append(args.spflags)
+
+ for path in args.path:
+ subprocess.run(command + [path], env=env, check=True)
+
+
+def coccinelle(args, config, env):
+ """Run coccinelle on all files matching the provided pattern."""
+ root = '/' if PurePath(args.cocci).is_absolute() else '.'
+ count = 0
+ for cocci_file in Path(root).glob(args.cocci):
+ count += 1
+ run_spatch(cocci_file, args, config, env)
+ return count
+
+
+def main(argv):
+ """Run coccicheck."""
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--verbose', '-v', action='count', default=0)
+ parser.add_argument('--spatch', type=PurePath, metavar='SPATCH',
+ default=os.environ.get('SPATCH'),
+ help=('Path to spatch binary. Defaults to '
+ 'value of environment variable SPATCH.'))
+ parser.add_argument('--spflags', type=PurePath,
+ metavar='SPFLAGS',
+ default=os.environ.get('SPFLAGS', None),
+ help=('Flags to pass to spatch call. Defaults '
+ 'to value of environment variable SPFLAGS.'))
+ parser.add_argument('--mode', choices=['patch', 'report', 'context'],
+ default=os.environ.get('MODE', 'report'),
+ help=('Mode to use for coccinelle. Defaults to '
+ 'value of environment variable MODE.'))
+ parser.add_argument('--jobs', default=os.environ.get('JOBS', None),
+ help=('Number of jobs to use for spatch. Defaults to '
+ 'value of environment variable JOBS.'))
+ parser.add_argument('--include', '-I', type=PurePath,
+ metavar='DIR',
+ help='Extra include directories.')
+ parser.add_argument('--patchdir', type=PurePath, metavar='DIR',
+ help=('Path for which patch should be created '
+ 'relative to.'))
+ parser.add_argument('cocci', metavar='pattern',
+ help='Pattern for Cocci files to use.')
+ parser.add_argument('path', nargs='+', type=PurePath,
+ help='Directory or source path to process.')
+
+ args = parser.parse_args(argv)
+
+ if args.verbose > 1:
+ print("arguments:", args)
+
+ if args.spatch is None:
+ parser.error('spatch is part of the Coccinelle project and is '
+ 'available at http://coccinelle.lip6.fr/')
+
+ if coccinelle(args, get_config(args), os.environ) == 0:
+ parser.error(f'no coccinelle files found matching {args.cocci}')
+
+
+if __name__ == '__main__':
+ try:
+ main(sys.argv[1:])
+ except KeyboardInterrupt:
+ print("Execution aborted")
+ except Exception as exc:
+ print(exc)
--
2.43.0
0002-Create-coccicheck-target-for-autoconf.v5.patch
From 97d4fa2f9e7b71fc730ff564a51f3dee4198f403 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Mon, 30 Dec 2024 19:58:07 +0100
Subject: [PATCH 2/7] Create coccicheck target for autoconf
This adds a coccicheck target for the autoconf-based build system. The
coccicheck target accepts one parameter MODE, which can be either "patch",
"report", or "context". The "patch" mode will generate a patch that can be
applied to the source tree, the "report" mode will generate a list of file
locations with information about what can be changed, and the "context" mode
will just highlight the line that will be affected by the semantic patch.
The following will generate a patch and apply it to the source code tree:
make coccicheck MODE=patch | patch -p1
---
configure | 100 ++++++++++++++++++++++++++++++++++++++---
configure.ac | 12 +++++
src/Makefile.global.in | 24 +++++++++-
src/makefiles/pgxs.mk | 3 ++
4 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/configure b/configure
index 22cd866147b..1091f9bc54f 100755
--- a/configure
+++ b/configure
@@ -779,6 +779,9 @@ enable_coverage
GENHTML
LCOV
GCOV
+enable_coccicheck
+SPFLAGS
+SPATCH
enable_debug
enable_rpath
default_port
@@ -846,6 +849,7 @@ with_pgport
enable_rpath
enable_debug
enable_profiling
+enable_coccicheck
enable_coverage
enable_dtrace
enable_tap_tests
@@ -1547,6 +1551,7 @@ Optional Features:
executables
--enable-debug build with debugging symbols (-g)
--enable-profiling build with profiling enabled
+ --enable-coccicheck enable Coccinelle checks (requires spatch)
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
--enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
@@ -3347,6 +3352,91 @@ fi
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+
+
+# Check whether --enable-coccicheck was given.
+if test "${enable_coccicheck+set}" = set; then :
+ enableval=$enable_coccicheck;
+ case $enableval in
+ yes)
+ if test -z "$SPATCH"; then
+ for ac_prog in spatch
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPATCH+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $SPATCH in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_SPATCH="$SPATCH" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_SPATCH="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+SPATCH=$ac_cv_path_SPATCH
+if test -n "$SPATCH"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$SPATCH" && break
+done
+
+else
+ # Report the value of SPATCH in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for SPATCH" >&5
+$as_echo_n "checking for SPATCH... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPATCH" >&5
+$as_echo "$SPATCH" >&6; }
+fi
+
+if test -z "$SPATCH"; then
+ as_fn_error $? "spatch not found" "$LINENO" 5
+fi
+
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-coccicheck option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_coccicheck=no
+
+fi
+
+
+
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
@@ -15183,7 +15273,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15229,7 +15319,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15253,7 +15343,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15298,7 +15388,7 @@ else
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
@@ -15322,7 +15412,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
We can't simply define LARGE_OFF_T to be 9223372036854775807,
since some C++ compilers masquerading as C compilers
incorrectly reject 9223372036854775807. */
-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
&& LARGE_OFF_T % 2147483647 == 1)
? 1 : -1];
diff --git a/configure.ac b/configure.ac
index e44943aa6fe..b75d7f49df6 100644
--- a/configure.ac
+++ b/configure.ac
@@ -193,6 +193,18 @@ AC_SUBST(enable_debug)
PGAC_ARG_BOOL(enable, profiling, no,
[build with profiling enabled ])
+#
+# --enable-coccicheck enables Coccinelle check target "coccicheck"
+#
+PGAC_ARG_BOOL(enable, coccicheck, no,
+ [enable Coccinelle checks (requires spatch)],
+[PGAC_PATH_PROGS(SPATCH, spatch)
+if test -z "$SPATCH"; then
+ AC_MSG_ERROR([spatch not found])
+fi
+AC_SUBST(SPFLAGS)])
+AC_SUBST(enable_coccicheck)
+
#
# --enable-coverage enables generation of code coverage metrics with gcov
#
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0aa389bc710..56977518705 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -19,7 +19,7 @@
#
# Meta configuration
-standard_targets = all install installdirs uninstall clean distclean coverage check checkprep installcheck init-po update-po
+standard_targets = all install installdirs uninstall clean distclean coccicheck coverage check checkprep installcheck init-po update-po
# these targets should recurse even into subdirectories not being built:
standard_always_targets = clean distclean
@@ -208,6 +208,7 @@ enable_rpath = @enable_rpath@
enable_nls = @enable_nls@
enable_debug = @enable_debug@
enable_dtrace = @enable_dtrace@
+enable_coccicheck = @enable_coccicheck@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
@@ -389,7 +390,7 @@ CLDR_VERSION = 47
# If a particular subdirectory knows this isn't needed in itself or its
# children, it can set NO_GENERATED_HEADERS.
-all install check installcheck: submake-generated-headers
+all install check installcheck coccicheck: submake-generated-headers
.PHONY: submake-generated-headers
@@ -538,6 +539,11 @@ FOP = @FOP@
XMLLINT = @XMLLINT@
XSLTPROC = @XSLTPROC@
+# Coccinelle
+
+SPATCH = @SPATCH@
+SPFLAGS = @SPFLAGS@
+
# Code coverage
GCOV = @GCOV@
@@ -1005,6 +1011,20 @@ endif # nls.mk
endif # enable_nls
+##########################################################################
+#
+# Coccinelle checks
+#
+
+ifeq ($(enable_coccicheck), yes)
+coccicheck_py = $(top_srcdir)/src/tools/coccicheck.py
+coccicheck = SPATCH=$(SPATCH) SPFLAGS=$(SPFLAGS) $(PYTHON) $(coccicheck_py)
+
+.PHONY: coccicheck
+coccicheck:
+ $(coccicheck) --mode=$(MODE) 'cocci/**/*.cocci' $(top_srcdir)
+endif # enable_coccicheck
+
##########################################################################
#
# Coverage
diff --git a/src/makefiles/pgxs.mk b/src/makefiles/pgxs.mk
index 039cee3dfe5..0f4e6eab619 100644
--- a/src/makefiles/pgxs.mk
+++ b/src/makefiles/pgxs.mk
@@ -95,6 +95,9 @@ endif
ifeq ($(FLEX),)
FLEX = flex
endif
+ifeq ($(SPATCH),)
+SPATCH = spatch
+endif
endif # PGXS
--
2.43.0
0003-Add-meson-build-for-coccicheck.v5.patch
From 5a720f711e38ee8e474482f24d086f8a0deabe5c Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Wed, 1 Jan 2025 14:15:51 +0100
Subject: [PATCH 3/7] Add meson build for coccicheck
This commit adds a run target `coccicheck` to meson build files.
Since ninja does not accept parameters the same way make does, there are three
run targets defined---"coccicheck-patch", "coccicheck-report", and
"coccicheck-context"---that you can use to generate a patch, get a report, and
get the context respectively. For example, to patch the tree from the "build"
subdirectory created by the meson run:
ninja coccicheck-patch | patch -d .. -p1
---
meson.build | 30 ++++++++++++++++++++++++++++++
meson_options.txt | 7 ++++++-
src/makefiles/meson.build | 6 ++++++
3 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/meson.build b/meson.build
index d71c7c8267e..53543c6c50a 100644
--- a/meson.build
+++ b/meson.build
@@ -350,6 +350,7 @@ missing = find_program('config/missing', native: true)
cp = find_program('cp', required: false, native: true)
xmllint_bin = find_program(get_option('XMLLINT'), native: true, required: false)
xsltproc_bin = find_program(get_option('XSLTPROC'), native: true, required: false)
+spatch = find_program(get_option('SPATCH'), native: true, required: false)
bison_flags = []
if bison.found()
@@ -1722,6 +1723,34 @@ else
endif
+###############################################################
+# Option: Coccinelle checks
+###############################################################
+
+coccicheck_opt = get_option('coccicheck')
+coccicheck_dep = not_found_dep
+if not coccicheck_opt.disabled()
+ if spatch.found()
+ coccicheck_dep = declare_dependency()
+ elif coccicheck_opt.enabled()
+ error('missing required tools (spatch needed) for Coccinelle checks')
+ endif
+endif
+
+if coccicheck_opt.enabled()
+ coccicheck_modes = ['context', 'report', 'patch']
+ foreach mode : coccicheck_modes
+ run_target('coccicheck-' + mode,
+ command: [python, files('src/tools/coccicheck.py'),
+ '--mode', mode,
+ '--spatch', spatch,
+ '--patchdir', '@SOURCE_ROOT@',
+ '@SOURCE_ROOT@/cocci/**/*.cocci',
+ '@SOURCE_ROOT@/src',
+ '@SOURCE_ROOT@/contrib',
+ ])
+ endforeach
+endif
###############################################################
# Compiler tests
@@ -3948,6 +3977,7 @@ summary(
{
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
+ 'spatch': spatch,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
},
section: 'Programs',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..f9f1b919667 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -43,6 +43,9 @@ option('cassert', type: 'boolean', value: false,
option('tap_tests', type: 'feature', value: 'auto',
description: 'Enable TAP tests')
+option('coccicheck', type: 'feature', value: 'auto',
+ description: 'Enable Coccinelle checks')
+
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -52,7 +55,6 @@ option('PG_TEST_EXTRA', type: 'string', value: '',
option('PG_GIT_REVISION', type: 'string', value: 'HEAD',
description: 'git revision to be packaged by pgdist target')
-
# Compilation options
option('extra_include_dirs', type: 'array', value: [],
@@ -201,6 +203,9 @@ option('PYTHON', type: 'array', value: ['python3', 'python'],
option('SED', type: 'string', value: 'gsed',
description: 'Path to sed binary')
+option('SPATCH', type: 'string', value: 'spatch',
+ description: 'Path to spatch binary, used for SmPL patches')
+
option('STRIP', type: 'string', value: 'strip',
description: 'Path to strip binary, used for PGXS emulation')
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 0def244c901..35616f524f2 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -57,6 +57,7 @@ pgxs_kv = {
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
+ 'enable_coccicheck': spatch.found() ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -149,6 +150,7 @@ pgxs_bins = {
'TAR': tar,
'ZSTD': program_zstd,
'DTRACE': dtrace,
+ 'SPATCH': spatch,
}
pgxs_empty = [
@@ -164,6 +166,10 @@ pgxs_empty = [
'DBTOEPUB',
'FOP',
+ # Coccinelle is not supported by pgxs
+ 'SPATCH',
+ 'SPFLAGS',
+
# supporting coverage for pgxs-in-meson build doesn't seem worth it
'GENHTML',
'LCOV',
--
2.43.0
0004-Semantic-patch-for-sizeof-using-palloc.v5.patch
From ac961bebf4338642c218940e9208c1dbc7a46426 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 5 Jan 2025 19:26:47 +0100
Subject: [PATCH 4/7] Semantic patch for sizeof() using palloc()
If palloc() is used to allocate elements of type T, the result should be assigned
to a variable of type T*, or we risk indexing out of bounds. This semantic patch
checks that allocations assigned to variables of type T* use sizeof(T) when
allocating memory with palloc().
---
cocci/palloc_sizeof.cocci | 49 +++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
create mode 100644 cocci/palloc_sizeof.cocci
diff --git a/cocci/palloc_sizeof.cocci b/cocci/palloc_sizeof.cocci
new file mode 100644
index 00000000000..5f8593c2687
--- /dev/null
+++ b/cocci/palloc_sizeof.cocci
@@ -0,0 +1,49 @@
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+import re
+
+CONST_CRE = re.compile(r'\bconst\b')
+
+def is_simple_type(s):
+ return s != 'void' and not CONST_CRE.search(s)
+
+@r1 depends on report || context@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+position p;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+* I = func@p(sizeof(T2))
+|
+* I = func@p(E * sizeof(T2))
+)
+
+@script:python depends on report@
+T1 << r1.T1;
+T2 << r1.T2;
+I << r1.I;
+p << r1.p;
+@@
+coccilib.report.print_report(p[0], f"'{I}' has type '{T1}*' but 'sizeof({T2})' is used to allocate memory")
+
+@depends on patch@
+type T1 : script:python () { is_simple_type(T1) };
+idexpression T1 *I;
+type T2 != T1;
+expression E;
+identifier func = {palloc, palloc0};
+@@
+(
+- I = func(sizeof(T2))
++ I = func(sizeof(T1))
+|
+- I = func(E * sizeof(T2))
++ I = func(E * sizeof(T1))
+)
--
2.43.0
0005-Semantic-patch-for-palloc_array-and-palloc_object.v5.patch
From 5d60778fa0dc07e8ece7fbb8cf0f4f924c3d35bf Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Sun, 29 Dec 2024 20:23:25 +0100
Subject: [PATCH 5/7] Semantic patch for palloc_array and palloc_object
Macros were added to the palloc API in commit 2016055a92f to improve
type-safety, but very few instances were replaced. This adds a cocci script to
do that replacement. The semantic patch deliberately does not replace instances
where the type of the variable and the type used in the macro do not match.
---
cocci/palloc_array.cocci | 157 +++++++++++++++++++++++++++++++++++++++
1 file changed, 157 insertions(+)
create mode 100644 cocci/palloc_array.cocci
diff --git a/cocci/palloc_array.cocci b/cocci/palloc_array.cocci
new file mode 100644
index 00000000000..aeeab74c3a9
--- /dev/null
+++ b/cocci/palloc_array.cocci
@@ -0,0 +1,157 @@
+// Since PG16 there are array versions of common palloc operations, so
+// we can use those instead.
+//
+// We ignore cases where we have an anonymous struct and also when the
+// type of the variable being assigned to is different from the
+// inferred type.
+//
+// Options: --no-includes --include-headers
+
+virtual patch
+virtual report
+virtual context
+
+// These rules (soN) are needed to rewrite types of the form
+// sizeof(T[C]) to C * sizeof(T) since Cocci cannot (currently) handle
+// it.
+@initialize:python@
+@@
+import re
+
+CRE = re.compile(r'(.*)\s+\[\s+(\d+)\s+\]$')
+
+def is_array_type(s):
+ mre = CRE.match(s)
+ return (mre is not None)
+
+@so1 depends on patch@
+type T : script:python() { is_array_type(T) };
+@@
+palloc(sizeof(T))
+
+@script:python so2 depends on patch@
+T << so1.T;
+T2;
+E;
+@@
+mre = CRE.match(T)
+coccinelle.T2 = cocci.make_type(mre.group(1))
+coccinelle.E = cocci.make_expr(mre.group(2))
+
+@depends on patch@
+type so1.T;
+type so2.T2;
+expression so2.E;
+@@
+- palloc(sizeof(T))
++ palloc(E * sizeof(T2))
+
+@r1 depends on report || context@
+type T !~ "^struct {";
+expression E;
+position p;
+idexpression T *I;
+identifier alloc = {palloc0, palloc};
+@@
+* I = alloc@p(E * sizeof(T))
+
+@script:python depends on report@
+p << r1.p;
+alloc << r1.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+T *P;
+idexpression T* I;
+constant C;
+identifier alloc = {palloc0, palloc};
+fresh identifier alloc_array = alloc ## "_array";
+@@
+(
+- I = (T*) alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = (T*) alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof( \( *P \| P[C] \) ))
++ I = alloc_array(T, E)
+|
+- I = alloc(E * sizeof(T))
++ I = alloc_array(T, E)
+)
+
+@r3 depends on report || context@
+type T !~ "^struct {";
+expression E;
+idexpression T *P;
+idexpression T *I;
+position p;
+@@
+* I = repalloc@p(P, E * sizeof(T))
+
+@script:python depends on report@
+p << r3.p;
+@@
+coccilib.report.print_report(p[0], "this repalloc can be replaced with repalloc_array")
+
+@depends on patch@
+type T !~ "^struct {";
+expression E;
+idexpression T *P1;
+idexpression T *P2;
+idexpression T *I;
+constant C;
+@@
+(
+- I = (T*) repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = (T*) repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof( \( *P2 \| P2[C] \) ))
++ I = repalloc_array(P1, T, E)
+|
+- I = repalloc(P1, E * sizeof(T))
++ I = repalloc_array(P1, T, E)
+)
+
+@r4 depends on report || context@
+type T !~ "^struct {";
+position p;
+idexpression T* I;
+identifier alloc = {palloc, palloc0};
+@@
+* I = alloc@p(sizeof(T))
+
+@script:python depends on report@
+p << r4.p;
+alloc << r4.alloc;
+@@
+coccilib.report.print_report(p[0], f"this {alloc} can be replaced with {alloc}_object")
+
+@depends on patch@
+type T !~ "^struct {";
+T* P;
+idexpression T *I;
+constant C;
+identifier alloc = {palloc, palloc0};
+fresh identifier alloc_object = alloc ## "_object";
+@@
+(
+- I = (T*) alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = (T*) alloc(sizeof(T))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof( \( *P \| P[C] \) ))
++ I = alloc_object(T)
+|
+- I = alloc(sizeof(T))
++ I = alloc_object(T)
+)
--
2.43.0
0006-Semantic-patch-for-pg_cmp_-functions.v5.patch
From 8f432d4c62dd5de13894c101a9b967df84ffbd93 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Thu, 23 Jan 2025 02:46:14 +0100
Subject: [PATCH 6/7] Semantic patch for pg_cmp_* functions
In commits 3b42bdb4716 and 6b80394781c, overflow-safe comparison functions were
introduced, but they are not widely used. This semantic patch identifies some
of the more common cases and replaces them with calls to the corresponding
pg_cmp_* function.
---
cocci/use_pg_cmp.cocci | 125 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 cocci/use_pg_cmp.cocci
diff --git a/cocci/use_pg_cmp.cocci b/cocci/use_pg_cmp.cocci
new file mode 100644
index 00000000000..8a258e61e5d
--- /dev/null
+++ b/cocci/use_pg_cmp.cocci
@@ -0,0 +1,125 @@
+// Find cases where we can use the new pg_cmp_* functions.
+//
+// Copyright 2025 Mats Kindahl, Timescale.
+//
+// Options: --no-includes --include-headers
+
+virtual report
+virtual context
+virtual patch
+
+@initialize:python@
+@@
+
+import re
+
+TYPMAP = {
+ 'BlockNumber': 'pg_cmp_u32',
+ 'ForkNumber': 'pg_cmp_s32',
+ 'OffsetNumber': 'pg_cmp_s16',
+ 'int': 'pg_cmp_s32',
+ 'int16': 'pg_cmp_s16',
+ 'int32': 'pg_cmp_s32',
+ 'uint16': 'pg_cmp_u16',
+ 'uint32': 'pg_cmp_u32',
+ 'unsigned int': 'pg_cmp_u32',
+}
+
+def is_valid(expr):
+ return not re.search(r'DatumGet[A-Za-z]+', expr)
+
+@r1e depends on context || report expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+* lhs@pos < rhs ? -1 : lhs > rhs ? 1 : 0
+
+@script:python depends on report@
+lhs << r1e.lhs;
+rhs << r1e.rhs;
+pos << r1e.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@r1 depends on context || report@
+type TypeName : script:python() { TypeName in TYPMAP };
+position pos;
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
+|
+* if@pos (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
+|
+* if@pos (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
+|
+* if@pos (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
+|
+* if@pos (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
+)
+
+@script:python depends on report@
+lhs << r1.lhs;
+rhs << r1.rhs;
+pos << r1.pos;
+@@
+coccilib.report.print_report(pos[0], f"conditional checks between '{lhs}' and '{rhs}' can be replaced with a PostgreSQL comparison function")
+
+@expr_repl depends on patch expression@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+- lhs < rhs ? -1 : lhs > rhs ? 1 : 0
++ cmp(lhs,rhs)
+
+@stmt_repl depends on patch@
+type TypeName : script:python() { TypeName in TYPMAP };
+fresh identifier cmp = script:python(TypeName) { TYPMAP[TypeName] };
+TypeName lhs : script:python() { is_valid(lhs) };
+TypeName rhs : script:python() { is_valid(rhs) };
+@@
+(
+- if (lhs < rhs) return -1; if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs < rhs) return -1; else if (lhs > rhs) return 1; else return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs > rhs) return 1; else if (lhs < rhs) return -1; return 0;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; if (lhs > rhs) return 1; return -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs > rhs ? 1 : -1;
++ return cmp(lhs,rhs);
+|
+- if (lhs == rhs) return 0; return lhs < rhs ? -1 : 1;
++ return cmp(lhs,rhs);
+)
+
+// Add an include if there were none and we had to do some
+// replacements
+@has_include depends on patch@
+@@
+ #include "common/int.h"
+
+@depends on patch && !has_include && (stmt_repl || expr_repl)@
+@@
+ #include ...
++ #include "common/int.h"
--
2.43.0
0007-Semantic-patch-to-use-stack-allocated-StringInfoData.v5.patch
From 792cf4e077761f31f7fcda0555b1169a8b1a62ba Mon Sep 17 00:00:00 2001
From: Mats Kindahl <mats@kindahl.net>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: [PATCH 7/7] Semantic patch to use stack-allocated StringInfoData
This semantic patch replaces uses of StringInfo with StringInfoData where the
info is dynamically allocated but (optionally) freed at the end of the block.
This avoids one dynamic allocation that would otherwise have to be dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the pointer can
"leak" outside the function either through a global variable or a parameter
assignment.
- If an assignment is done to the expression. This cannot leak the data, but
could mean a value-assignment of a structure, so we avoid this case.
- If the pointer is returned.
---
cocci/use_stringinfodata.cocci | 155 +++++++++++++++++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..4186027f8c9
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,155 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This avoids one dynamic allocation that would otherwise have
+// to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one is just a convenience.
+//
+// If we replace this, the resulting change will result in a value
+// copy of a structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* \(pfree\|destroyStringInfo\)(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" since we cannot
+// inherit over modifications
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ info = E
+|
+ E = info
+)
+
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+ ... when != S
+(
+<...
+(
+- \(destroyStringInfo\|pfree\)(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
--
2.43.0
Hi,
On Thu, Oct 30, 2025 at 02:32:45PM +0100, Mats Kindahl wrote:
Hi all,
Here is an updated set of patches based on the latest HEAD of PostgreSQL.
Thanks for those patches and the initiative! Sorry to be late, I started
to look at Coccinelle "for PostgreSQL" in [1] (to ensure some macros are used)
and saw this thread.
I did not look at the patches in detail, but sharing some thoughts.
I agree that Coccinelle could be/is useful in relation to PostgreSQL development,
but I think that we'd need to determine why it would be useful to add the
coccicheck.py script and the new dependencies in autoconf/meson.
Coming back to your points and thinking if it could be used as a tool or integrated
into the build system:
1) Identify and correct bugs in the source code both during development and
review
I agree that makes sense here. Having some bugs detected "automatically" would
be great.
2) Make large-scale changes to the source tree to improve the code based on
new insights
I'm not sure we need to introduce coccicheck.py and add dependencies in
autoconf/meson for this. The developer would need to know that he could use
Coccinelle, and if he already knows, then nothing prevents him from using it
in his development toolbox.
3) Encode and enforce APIs by ensuring that function calls are used
correctly
Same as 2) That said we could also imagine running yearly checks automatically
too using coccicheck.py.
4) Use improved coding patterns for more efficient code
Same as 3) from my point of view.
5) Allow extensions to automatically update code for later PostgreSQL
versions
Same as 2) from my point of view.
So, I think that the current proposal (i.e., build system integration) is a good
fit for 1), less so for 3) and 4) and not necessarily needed for 2) and 5).
The proposal will add new dependencies (as Michael stated up-thread) and introduce
a new language (SmPL) that folks would need to be comfortable with to review
the .cocci scripts.
I don't have an answer to it but I think that the main question is: Should we
integrate this into the build system, or just document it as an optional
developer tool (wiki or such and provide .cocci scripts example)?
[1]: /messages/by-id/aQMtR/m4kW4Rkul+@ip-10-97-1-34.eu-west-3.compute.internal
Regards,
--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
On 11/3/25 10:55, Bertrand Drouvot wrote:
Hi,
On Thu, Oct 30, 2025 at 02:32:45PM +0100, Mats Kindahl wrote:
Hi all,
Here is an updated set of patches based on the latest HEAD of PostgreSQL.
Thanks for those patches and the initiative! Sorry to be late, I started
to look at Coccinelle "for PostgreSQL" in [1] (to ensure some macros are used)
and saw this thread.
Hi Bertrand and thank you for the comments.
I did not look at the patches in detail, but sharing some thoughts.
I agree that Coccinelle could be/is useful in relation to PostgreSQL development,
but I think that we'd need to determine why it would be useful to add the
coccicheck.py script and the new dependencies in autoconf/meson.
Coming back to your points and thinking if it could be used as a tool or integrated
into the build system:
1) Identify and correct bugs in the source code both during development and
review
I agree that makes sense here. Having some bugs detected "automatically" would
be great.
2) Make large-scale changes to the source tree to improve the code based on
new insights
I'm not sure we need to introduce coccicheck.py and add dependencies in
autoconf/meson for this. The developer would need to know that he could use
Coccinelle, and if he already knows, then nothing prevents him from using it
in his development toolbox.
3) Encode and enforce APIs by ensuring that function calls are used
correctly
Same as 2) That said we could also imagine running yearly checks automatically
too using coccicheck.py.
4) Use improved coding patterns for more efficient code
Same as 3) from my point of view.
5) Allow extensions to automatically update code for later PostgreSQL
versions
Same as 2) from my point of view.
So, I think that the current proposal (i.e., build system integration) is a good
fit for 1), less so for 3) and 4) and not necessarily needed for 2) and 5).
The proposal will add new dependencies (as Michael stated up-thread) and introduce
a new language (SmPL) that folks would need to be comfortable with to review
the .cocci scripts.
I don't have an answer to it but I think that the main question is: Should we
integrate this into the build system, or just document it as an optional
developer tool (wiki or such and provide .cocci scripts example)?
All of these are good points. My main reason for proposing this is that
I think that using Coccinelle more systematically would improve the
quality of the code and make it easier to maintain. The build system
changes and the coccicheck.py script are a proposal for how to make this
happen, but I realize there are alternative ways as well as tradeoffs to
be made (e.g., is the extra maintenance burden worth the improvement in
quality?).
I think there are three issues here:
1. Shall the semantic patches be in the source tree?
2. Shall we use the coccicheck.py script?
3. Shall we support this in the build system, that is, meson and autoconf?
Putting the coccicheck.py script aside for a short while, I think there
is value in keeping the semantic patches in the source tree rather than
adding them to a wiki for the following reasons:
- It makes it easy for reviewers to check that the code under review follows
"best practices" (of course, assuming that the .cocci scripts represent
"best practices") instead of having to download and run these scripts
from another source. Having them on a separate page would reduce the
value since it makes them more difficult to use.
- If they are part of the repository, the .cocci scripts will be maintained
and updated to match the code in the source tree. If they are on a
separate wiki page, or in a different repository, they are likely to be
version dependent: the code might change so that they are no longer
relevant, which makes it more difficult to use them for reviewing and
checking code, since that would also require updating them to match the
new code before applying them.
A drawback is that it is an extra maintenance burden (probably small,
but extra work nevertheless) and would require the semantic patches to
be executed regularly, by the build system or by reviewers, to check
that they do not report anything. If these checks are not done
regularly, it is possible that the scripts would become obsolete after a
while. By making it easy to run the scripts, we are more likely to run
them and discover issues in the scripts as well as in the code.
Regarding integration with the build system: note that there are three
modes: report, context, and (generate) patch. The important modes are
report and patch.
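To make this concrete, this is roughly how the checks would be run with the
build-system targets proposed in the 0002 and 0003 patches (a sketch; the
target names and the MODE variable are the ones used in those patches):

# autoconf/make build: select the mode with the MODE variable
make coccicheck MODE=report
make coccicheck MODE=patch | patch -p1

# meson build: one run target per mode, executed from the build directory
ninja coccicheck-report
ninja coccicheck-patch | patch -d .. -p1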
The main advantage of integrating it with the build system is that it
is easy to run, which makes it easy for reviewers, developers, and build
systems to ensure that the semantic patches do not report anything. This
will improve the quality of both the semantic patches and the source
code.
As you say, it does add an extra dependency, but this should be optional
and only be used if spatch is installed. Parties not interested in
using spatch, and hence without spatch installed, will not be affected,
so it should not interfere with them.
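For example, with the 0002 and 0003 patches applied, enabling the checks is
an explicit opt-in (the option names are the ones defined in those patches;
the spatch path below is only an illustration):

./configure --enable-coccicheck
meson setup build -Dcoccicheck=enabled -DSPATCH=/usr/local/bin/spatch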
Integrating it with the build system would also allow builders and
reviewers to use the same method for checking the code using report
mode, regardless of what version is being used; you would run the checks
the same way for each minor release, for example.
As you say, the coccicheck.py script is, strictly speaking, not needed
to use Coccinelle and is more intended as a convenience. The main
advantage of using something like the coccicheck.py script is that it
reads the options from the semantic patches and uses them when running
spatch. If using spatch directly, you would have to check the scripts,
extract the options, and then pass them when running spatch. Not
impossible, but an extra step that makes it more inconvenient to use
Coccinelle, which in turn makes it less likely to be used and
maintained.
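As an illustration, palloc_array.cocci in this series carries the header
line "// Options: --no-includes --include-headers". Running that file by
hand means assembling something like the command below yourself, which is
what coccicheck.py does for you (a sketch that follows run_spatch() in the
script; the paths are only examples):

spatch -D report --cocci-file cocci/palloc_array.cocci --very-quiet \
    --no-includes --include-headers --no-show-diff src/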
You could automate the extraction of options from the semantic patches,
but this script provides a platform-independent method to run the checks
for reviewers, developers, and build systems. Note that different
semantic patches can use different options, so if you want to automate
this, you would need to write something like coccicheck.py anyway.
Best wishes,
Mats Kindahl
[1]: /messages/by-id/aQMtR/m4kW4Rkul+@ip-10-97-1-34.eu-west-3.compute.internal
Regards,