Inconsistency between table am callback and table function names
The general theme for table function names seems to be
"table_<am_callback_name>". For example, table_scan_getnextslot() and its
corresponding callback scan_getnextslot(). Most of the table functions and
callbacks follow this convention, except for the following ones:
table_beginscan
table_endscan
table_rescan
table_fetch_row_version
table_get_latest_tid
table_insert
table_insert_speculative
table_complete_speculative
table_delete
table_update
table_lock_tuple
The corresponding callback names for them are:
scan_begin
scan_end
scan_rescan
tuple_fetch_row_version
tuple_get_latest_tid
tuple_insert
tuple_insert_speculative
tuple_delete
tuple_update
tuple_lock
This is confusing while browsing through the code, and hence I would like to
propose we make them consistent. Either fix the callback names or the table
function names, but all should follow the same convention; that makes the code
easier to browse around and refer to as well. Personally, I would say fix the
table function names, as the callback names seem fine. So, for example, make it
table_scan_begin().
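For illustration, the pattern in question is the thin wrapper style used in
tableam.h, roughly like the sketch below (simplified from memory around the
existing table_scan_getnextslot() wrapper and its scan_getnextslot callback;
the real declarations carry more fields and comments):

    /* callback member of struct TableAmRoutine */
    bool        (*scan_getnextslot) (TableScanDesc scan,
                                     ScanDirection direction,
                                     TupleTableSlot *slot);

    /* wrapper named after the callback, i.e. table_<am_callback_name> */
    static inline bool
    table_scan_getnextslot(TableScanDesc sscan, ScanDirection direction,
                           TupleTableSlot *slot)
    {
        return sscan->rs_rd->rd_tableam->scan_getnextslot(sscan, direction,
                                                          slot);
    }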
Also, some of these table function names read a little odd:
table_relation_set_new_filenode
table_relation_nontransactional_truncate
table_relation_copy_data
table_relation_copy_for_cluster
table_relation_vacuum
table_relation_estimate_size
Can we drop the "relation" word from the callback names, and as a result from
these function names as well? That is, just have the callback names be
set_new_filenode, copy_data, estimate_size?
Also, a question about comments. Currently, redundant comments are written
above the callback functions as well as above the table functions. They
sometimes differ a little bit in their descriptions, but the majority of the
content is the same. Should we settle on one place to have the comments, to
keep them in sync and also to know which one to refer to?
Plus, the file name amapi.h now seems very broad; if possible it should be
renamed to indexamapi.h or indexam.h to follow tableam.h. I have no idea what
our policy is around header file renames.
Hi,
On 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:
The general theme for table function names seems to be
"table_<am_callback_name>". For example, table_scan_getnextslot() and its
corresponding callback scan_getnextslot(). Most of the table functions and
callbacks follow this convention, except for the following ones:
table_beginscan
table_endscan
table_rescan
table_fetch_row_version
table_get_latest_tid
table_insert
table_insert_speculative
table_complete_speculative
table_delete
table_update
table_lock_tuple
The corresponding callback names for them are:
scan_begin
scan_end
scan_rescan
The mismatch here is just due to backward compat with the existing
function names.
tuple_fetch_row_version
tuple_get_latest_tid
Hm, I'd not object to adding a tuple_ to the wrapper.
tuple_insert
tuple_insert_speculative
tuple_delete
tuple_update
tuple_lock
That again is to keep the naming similar to the existing functions.
Also, some of these table function names read a little odd:
table_relation_set_new_filenode
table_relation_nontransactional_truncate
table_relation_copy_data
table_relation_copy_for_cluster
table_relation_vacuum
table_relation_estimate_size
Can we drop the "relation" word from the callback names, and as a result from
these function names as well? That is, just have the callback names be
set_new_filenode, copy_data, estimate_size?
I'm strongly against that. These all work on a full relation,
rather than on individual tuples, and that seems worth pointing out.
Also, a question about comments. Currently, redundant comments are written
above the callback functions as well as above the table functions. They
sometimes differ a little bit in their descriptions, but the majority of the
content is the same. Should we settle on one place to have the comments, to
keep them in sync and also to know which one to refer to?
Note that the non-differing comments usually just refer to the other
place. And there are legitimate differences in most of the ones that are
both at the callback and the external functions - since the audiences of
the two are different, that IMO makes sense.
Plus, the file name amapi.h now seems very broad; if possible it should be
renamed to indexamapi.h or indexam.h to follow tableam.h. I have no idea what
our policy is around header file renames.
We probably should rename it, but not in 12...
Greetings,
Andres Freund
On Wed, May 8, 2019 at 2:51 PM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:
The general theme for table function names seems to be
"table_<am_callback_name>". For example, table_scan_getnextslot() and its
corresponding callback scan_getnextslot(). Most of the table functions and
callbacks follow this convention, except for the following ones:
table_beginscan
table_endscan
table_rescan
table_fetch_row_version
table_get_latest_tid
table_insert
table_insert_speculative
table_complete_speculative
table_delete
table_update
table_lock_tuple
The corresponding callback names for them are:
scan_begin
scan_end
scan_rescan
The mismatch here is just due to backward compat with the existing
function names.
I am missing something here and would like to know more. The table_ names are
all fresh new naming, hence IMO having consistency with the surrounding and
related code carries more weight, as I don't see what purpose backward compat
serves here. The heap function names can continue with the same old names for
backward compat if required.
Also, a question about comments. Currently, redundant comments are written
above the callback functions as well as above the table functions. They
sometimes differ a little bit in their descriptions, but the majority of the
content is the same. Should we settle on one place to have the comments, to
keep them in sync and also to know which one to refer to?
Note that the non-differing comments usually just refer to the other
place. And there are legitimate differences in most of the ones that are
both at the callback and the external functions - since the audiences of
the two are different, that IMO makes sense.
Not having consistency is the main aspect I wish to bring to
attention. Like for some callback functions the comment is
/* see table_insert() for reference about parameters */
void (*tuple_insert) (Relation rel, TupleTableSlot *slot,
CommandId cid, int options,
struct BulkInsertStateData *bistate);
/* see table_insert_speculative() for reference about parameters
*/
void (*tuple_insert_speculative) (Relation rel,
TupleTableSlot *slot,
CommandId cid,
int options,
struct
BulkInsertStateData *bistate,
uint32 specToken);
Whereas for some other callbacks the parameter explanation exists in both
places. It seems we should be consistent.
I feel that in the long run it becomes a pain to keep them in sync as the
comments evolve. For example:
    /*
     * Estimate the size of shared memory needed for a parallel scan of this
     * relation. The snapshot does not need to be accounted for.
     */
    Size        (*parallelscan_estimate) (Relation rel);
parallelscan_estimate does not have a snapshot argument passed to it, but
table_parallelscan_estimate does. So the chances are high that the two
comments go out of sync, with someone missing a modification in one of the
places. I agree, though, that the audience is different for the two. So,
going with the "see XXX for reference about parameters" approach for all the
callbacks seems fine, and then only the specific things the AM layer needs to
be aware of can live above the callback function.
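To make that drift concrete: the snapshot handling lives in the wrapper rather
than in the callback, roughly along the lines of the sketch below (paraphrased
from memory of tableam.c, not a verbatim quote):

    Size
    table_parallelscan_estimate(Relation rel, Snapshot snapshot)
    {
        Size        sz = 0;

        /* the wrapper accounts for the snapshot itself ... */
        if (IsMVCCSnapshot(snapshot))
            sz = add_size(sz, EstimateSnapshotSpace(snapshot));
        else
            Assert(snapshot == SnapshotAny);

        /* ... while the AM callback only sizes its own scan state */
        sz = add_size(sz, rel->rd_tableam->parallelscan_estimate(rel));

        return sz;
    }

So the wrapper's comment has to mention the snapshot while the callback's
comment must not, which is exactly the kind of divergence that is easy to get
wrong when editing only one of the two.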
Plus, the file name amapi.h now seems very broad; if possible it should be
renamed to indexamapi.h or indexam.h to follow tableam.h. I have no idea what
our policy is around header file renames.
We probably should rename it, but not in 12...
Okay, good to know.
Hi,
On 2019-05-08 17:05:07 -0700, Ashwin Agrawal wrote:
On Wed, May 8, 2019 at 2:51 PM Andres Freund <andres@anarazel.de> wrote:
On 2019-05-08 00:32:22 -0700, Ashwin Agrawal wrote:
The general theme for table function names seems to be
"table_<am_callback_name>". For example, table_scan_getnextslot() and its
corresponding callback scan_getnextslot(). Most of the table functions and
callbacks follow this convention, except for the following ones:
table_beginscan
table_endscan
table_rescan
table_fetch_row_version
table_get_latest_tid
table_insert
table_insert_speculative
table_complete_speculative
table_delete
table_update
table_lock_tuple
The corresponding callback names for them are:
scan_begin
scan_end
scan_rescan
The mismatch here is just due to backward compat with the existing
function names.
I am missing something here and would like to know more. The table_ names are
all fresh new naming, hence IMO having consistency with the surrounding and
related code carries more weight, as I don't see what purpose backward compat
serves here. The heap function names can continue with the same old names for
backward compat if required.
The changes necessary for tableam were already huge. Changing naming
schemes for functions that are used all over the backend (e.g. ~80 calls
to table_beginscan), and where there are other wrapper functions that are also
widely used (237 calls to systable_beginscan) which didn't have to be
touched, at the same time would have made it even harder to review.
Greetings,
Andres Freund
On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:
The changes necessary for tableam were already huge. Changing naming
schemes for functions that are used all over the backend (e.g. ~80 calls
to table_beginscan), and where there are other wrapper functions that are also
widely used (237 calls to systable_beginscan) which didn't have to be
touched, at the same time would have made it even harder to review.
If there are no objections to renaming now, as a separate independent
patch, I am happy to do so and send it across. I will rename to make
things consistent as mentioned at the start of the thread, leaving the
table_relation_xxx() ones as they are today.
Hi,
On 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:
On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:
The changes necessary for tableam were already huge. Changing naming
schemes for functions that are used all over the backend (e.g. ~80 calls
to table_beginscan), and where there are other wrapper functions that are also
widely used (237 calls to systable_beginscan) which didn't have to be
touched, at the same time would have made it even harder to review.
If there are no objections to renaming now, as a separate independent
patch, I am happy to do so and send it across. I will rename to make
things consistent as mentioned at the start of the thread, leaving the
table_relation_xxx() ones as they are today.
What would you want to rename precisely? I don't think it's useful to
start sending patches before we agree on something concrete. I'm not on
board with patching hundreds of systable_beginscan calls (that'll break a
lot of external code, besides the churn of in-core code), nor with the
APIs around that having a diverging name scheme.
Greetings,
Andres Freund
On Fri, May 10, 2019 at 10:51 AM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:
On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:
The changes necessary for tableam were already huge. Changing naming
schemes for functions that are used all over the backend (e.g. ~80 calls
to table_beginscan), and where there are other wrapper functions that are also
widely used (237 calls to systable_beginscan) which didn't have to be
touched, at the same time would have made it even harder to review.
If there are no objections to renaming now, as a separate independent
patch, I am happy to do so and send it across. I will rename to make
things consistent as mentioned at the start of the thread, leaving the
table_relation_xxx() ones as they are today.
What would you want to rename precisely? I don't think it's useful to
start sending patches before we agree on something concrete. I'm not on
board with patching hundreds of systable_beginscan calls (that'll break a
lot of external code, besides the churn of in-core code), nor with the
APIs around that having a diverging name scheme.
I meant to stick a question mark in that email but somehow missed it. Yes,
I am not planning to spend any time on it if there are objections. Here is
the list of renames I wish to perform.
Let's start with the low-hanging ones.
table_rescan -> table_scan_rescan
git grep table_rescan | wc -l
6
table_insert -> table_tuple_insert
git grep tuple_insert | wc -l
13
table_insert_speculative -> table_tuple_insert_speculative
git grep tuple_insert_speculative | wc -l
5
table_delete -> table_tuple_delete (table_delete reads incorrectly, as it is
not deleting the table)
git grep tuple_delete | wc -l
8
table_update -> table_tuple_update
git grep tuple_update | wc -l
5
table_lock_tuple -> table_tuple_lock
git grep tuple_lock | wc -l
26
For the below two, you already mentioned no objections to renaming:
table_fetch_row_version -> table_tuple_fetch_row_version
table_get_latest_tid -> table_tuple_get_latest_tid
Now, table_beginscan and table_endscan are the ones which are
widespread. The desire seems to be to keep them consistent with
systable_beginscan. I understand the constraints and churn aspect, but given
that a diverging naming scheme is unavoidable anyway, why not leave
systable_beginscan as it is and only rename table_beginscan and its
counterparts table_beginscan_xxx(), at least?
The index interfaces and table interfaces already have somewhat divergent
naming, like index_getnext_slot and table_scan_getnextslot. I am not
proposing to change those. But reducing the divergence wherever possible,
sooner rather than later, would be helpful.
Hi,
On 2019-05-10 12:43:06 -0700, Ashwin Agrawal wrote:
On Fri, May 10, 2019 at 10:51 AM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-05-10 10:43:44 -0700, Ashwin Agrawal wrote:
On Thu, May 9, 2019 at 8:52 AM Andres Freund <andres@anarazel.de> wrote:
The changes necessary for tableam were already huge. Changing naming
schemes for functions that are used all over the backend (e.g. ~80 calls
to table_beginscan), and where there are other wrapper functions that are also
widely used (237 calls to systable_beginscan) which didn't have to be
touched, at the same time would have made it even harder to review.
If there are no objections to renaming now, as a separate independent
patch, I am happy to do so and send it across. I will rename to make
things consistent as mentioned at the start of the thread, leaving the
table_relation_xxx() ones as they are today.
What would you want to rename precisely? I don't think it's useful to
start sending patches before we agree on something concrete. I'm not on
board with patching hundreds of systable_beginscan calls (that'll break a
lot of external code, besides the churn of in-core code), nor with the
APIs around that having a diverging name scheme.
I meant to stick a question mark in that email but somehow missed it. Yes,
I am not planning to spend any time on it if there are objections. Here is
the list of renames I wish to perform.
Let's start with the low-hanging ones.
table_rescan -> table_scan_rescan
git grep table_rescan | wc -l
6
table_insert -> table_tuple_insert
git grep tuple_insert | wc -l
13
table_insert_speculative -> table_tuple_insert_speculative
git grep tuple_insert_speculative | wc -l
5
table_delete -> table_tuple_delete (table_delete reads incorrectly, as it is
not deleting the table)
git grep tuple_delete | wc -l
8
table_update -> table_tuple_update
git grep tuple_update | wc -l
5
table_lock_tuple -> table_tuple_lock
git grep tuple_lock | wc -l
26
For the below two, you already mentioned no objections to renaming:
table_fetch_row_version -> table_tuple_fetch_row_version
table_get_latest_tid -> table_tuple_get_latest_tid
Now, table_beginscan and table_endscan are the ones which are
widespread. The desire seems to be to keep them consistent with
systable_beginscan. I understand the constraints and churn aspect, but given
that a diverging naming scheme is unavoidable anyway, why not leave
systable_beginscan as it is and only rename table_beginscan and its
counterparts table_beginscan_xxx(), at least?
The index interfaces and table interfaces already have somewhat divergent
naming, like index_getnext_slot and table_scan_getnextslot. I am not
proposing to change those. But reducing the divergence wherever possible,
sooner rather than later, would be helpful.
My personal opinion is that this is more churn than I think is useful to
tackle after feature freeze, without sufficient benefits. If others
chime in voting to do this, I'm OK with doing that, but otherwise I
think there's more important stuff to do.
Greetings,
Andres Freund
On 2019-May-10, Andres Freund wrote:
My personal opinion is that this is more churn than I think is useful to
tackle after feature freeze, without sufficient benefits. If others
chime in voting to do this, I'm OK with doing that, but otherwise I
think there's more important stuff to do.
One issue is that if we don't change things now, we can never change them
afterwards, so we should make some effort to ensure that the naming is
sensible. And we already changed the names of the whole interface.
I'm not voting to accept all of Ashwin's proposals right away, only to
have the names considered.
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 2019-05-10 16:18:32 -0400, Alvaro Herrera wrote:
On 2019-May-10, Andres Freund wrote:
My personal opinion is that this is more churn than I think is useful to
tackle after feature freeze, without sufficient benefits. If others
chime in voting to do this, I'm OK with doing that, but otherwise I
think there's more important stuff to do.
One issue is that if we don't change things now, we can never change them
afterwards, so we should make some effort to ensure that the naming is
sensible. And we already changed the names of the whole interface.
Well, the point is that there's symmetry with a lot of similar functions
that were *not* affected by the tableam changes. Cf. systable_beginscan
et al. We could add wrappers etc to make it less painful, but then
there's no urgency either.
Greetings,
Andres Freund
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
On 2019-May-10, Andres Freund wrote:
My personal opinion is that this is more churn than I think is useful to
tackle after feature freeze, without sufficient benefits. If others
chime in voting to do this, I'm OK with doing that, but otherwise I
think there's more important stuff to do.
One issue is that if we don't change things now, we can never change them
afterwards, so we should make some effort to ensure that the naming is
sensible. And we already changed the names of the whole interface.
Yeah. I do not have an opinion on whether these changes are actually
improvements, but renaming right now is way less painful than it would
be to rename post-v12. Let's try to get it right the first time,
especially with functions we already renamed in this cycle.
I do think that the "too much churn" argument has merit for places
that were *not* already changed in v12. In particular I'd vote against
renaming the systable_xxx functions.
regards, tom lane
On Fri, May 10, 2019 at 3:43 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:
I meant to stick a question mark in that email but somehow missed it. Yes,
I am not planning to spend any time on it if there are objections. Here is
the list of renames I wish to perform.
Let's start with the low-hanging ones.
table_rescan -> table_scan_rescan
table_insert -> table_tuple_insert
table_insert_speculative -> table_tuple_insert_speculative
table_delete -> table_tuple_delete
table_update -> table_tuple_update
table_lock_tuple -> table_tuple_lock
For the below two, you already mentioned no objections to renaming:
table_fetch_row_version -> table_tuple_fetch_row_version
table_get_latest_tid -> table_tuple_get_latest_tid
Now, table_beginscan and table_endscan are the ones which are
widespread.
I vote to rename all the ones where the new name would contain "tuple"
and to leave the others alone, i.e. leave table_beginscan,
table_endscan, and table_rescan as they are. I think that there's
little benefit in standardizing table_rescan but not the other two,
and we seem to agree that tinkering with the other two gets into a
painful amount of churn.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, May 13, 2019 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Fri, May 10, 2019 at 3:43 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:
I meant to stick a question mark in that email but somehow missed it. Yes,
I am not planning to spend any time on it if there are objections. Here is
the list of renames I wish to perform.
Let's start with the low-hanging ones.
table_rescan -> table_scan_rescan
table_insert -> table_tuple_insert
table_insert_speculative -> table_tuple_insert_speculative
table_delete -> table_tuple_delete
table_update -> table_tuple_update
table_lock_tuple -> table_tuple_lock
For the below two, you already mentioned no objections to renaming:
table_fetch_row_version -> table_tuple_fetch_row_version
table_get_latest_tid -> table_tuple_get_latest_tid
Now, table_beginscan and table_endscan are the ones which are
widespread.
I vote to rename all the ones where the new name would contain "tuple"
and to leave the others alone, i.e. leave table_beginscan,
table_endscan, and table_rescan as they are. I think that there's
little benefit in standardizing table_rescan but not the other two,
and we seem to agree that tinkering with the other two gets into a
painful amount of churn.
Thank you. Please find attached the patch to rename the agreed functions. It
would be good to make all of them consistent instead of making an exception
for three functions, but there seems to be no consensus on that. Given that
the table_ APIs are new, we could have modified them while leaving the
systable_ ones as is, but given the objections I left those three as is.
Attachments:
0001-Rename-table-am-wrappers-to-match-table-am-callback-.patch (text/x-patch)
From 39fcc223a0aaeacf545e09cfe29b8d565d970234 Mon Sep 17 00:00:00 2001
From: Ashwin Agrawal <aagrawal@pivotal.io>
Date: Tue, 14 May 2019 10:10:21 -0700
Subject: [PATCH] Rename table am wrappers to match table am callback names.
Now all the table wrapper functions match the
"table_<am_callback_name>" theme, except table_beginscan(),
table_endscan() and table_rescan(). Those three are deliberately left
unchanged, per the discussion, to avoid code churn.
---
src/backend/access/heap/heapam.c | 10 ++--
src/backend/access/table/tableam.c | 46 +++++++--------
src/backend/commands/copy.c | 12 ++--
src/backend/commands/createas.c | 14 ++---
src/backend/commands/matview.c | 14 ++---
src/backend/commands/tablecmds.c | 4 +-
src/backend/commands/trigger.c | 12 ++--
src/backend/executor/execMain.c | 8 +--
src/backend/executor/execReplication.c | 16 +++---
src/backend/executor/nodeLockRows.c | 8 +--
src/backend/executor/nodeModifyTable.c | 78 ++++++++++++-------------
src/backend/executor/nodeTidscan.c | 4 +-
src/backend/utils/adt/tid.c | 4 +-
src/include/access/tableam.h | 80 +++++++++++++-------------
14 files changed, 155 insertions(+), 155 deletions(-)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ec9853603fd..86a964c4f48 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1856,12 +1856,12 @@ ReleaseBulkInsertStatePin(BulkInsertState bistate)
* The new tuple is stamped with current transaction ID and the specified
* command ID.
*
- * See table_insert for comments about most of the input flags, except that
+ * See table_tuple_insert for comments about most of the input flags, except that
* this routine directly takes a tuple rather than a slot.
*
* There's corresponding HEAP_INSERT_ options to all the TABLE_INSERT_
* options, and there additionally is HEAP_INSERT_SPECULATIVE which is used to
- * implement table_insert_speculative().
+ * implement table_tuple_insert_speculative().
*
* On return the header fields of *tup are updated to match the stored tuple;
* in particular tup->t_self receives the actual TID where the tuple was
@@ -2434,7 +2434,7 @@ xmax_infomask_changed(uint16 new_infomask, uint16 old_infomask)
/*
* heap_delete - delete a tuple
*
- * See table_delete() for an explanation of the parameters, except that this
+ * See table_tuple_delete() for an explanation of the parameters, except that this
* routine directly takes a tuple rather than a slot.
*
* In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
@@ -2880,7 +2880,7 @@ simple_heap_delete(Relation relation, ItemPointer tid)
/*
* heap_update - replace a tuple
*
- * See table_update() for an explanation of the parameters, except that this
+ * See table_tuple_update() for an explanation of the parameters, except that this
* routine directly takes a tuple rather than a slot.
*
* In the failure cases, the routine fills *tmfd with the tuple's t_ctid,
@@ -3951,7 +3951,7 @@ get_mxact_status_for_lock(LockTupleMode mode, bool is_update)
* *buffer: set to buffer holding tuple (pinned but not locked at exit)
* *tmfd: filled in failure cases (see below)
*
- * Function results are the same as the ones for table_lock_tuple().
+ * Function results are the same as the ones for table_tuple_lock().
*
* In the failure cases other than TM_Invisible, the routine fills
* *tmfd with the tuple's t_ctid, t_xmax (resolving a possible MultiXact,
diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c
index baba1ea699b..2bf70fcdb60 100644
--- a/src/backend/access/table/tableam.c
+++ b/src/backend/access/table/tableam.c
@@ -219,19 +219,19 @@ table_index_fetch_tuple_check(Relation rel,
*/
/*
- * simple_table_insert - insert a tuple
+ * simple_table_tuple_insert - insert a tuple
*
- * Currently, this routine differs from table_insert only in supplying a
+ * Currently, this routine differs from table_tuple_insert only in supplying a
* default command ID and not allowing access to the speedup options.
*/
void
-simple_table_insert(Relation rel, TupleTableSlot *slot)
+simple_table_tuple_insert(Relation rel, TupleTableSlot *slot)
{
- table_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
+ table_tuple_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
}
/*
- * simple_table_delete - delete a tuple
+ * simple_table_tuple_delete - delete a tuple
*
* This routine may be used to delete a tuple when concurrent updates of
* the target tuple are not expected (for example, because we have a lock
@@ -239,16 +239,16 @@ simple_table_insert(Relation rel, TupleTableSlot *slot)
* via ereport().
*/
void
-simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
+simple_table_tuple_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
{
TM_Result result;
TM_FailureData tmfd;
- result = table_delete(rel, tid,
- GetCurrentCommandId(true),
- snapshot, InvalidSnapshot,
- true /* wait for commit */ ,
- &tmfd, false /* changingPart */ );
+ result = table_tuple_delete(rel, tid,
+ GetCurrentCommandId(true),
+ snapshot, InvalidSnapshot,
+ true /* wait for commit */ ,
+ &tmfd, false /* changingPart */ );
switch (result)
{
@@ -270,13 +270,13 @@ simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
break;
default:
- elog(ERROR, "unrecognized table_delete status: %u", result);
+ elog(ERROR, "unrecognized table_tuple_delete status: %u", result);
break;
}
}
/*
- * simple_table_update - replace a tuple
+ * simple_table_tuple_update - replace a tuple
*
* This routine may be used to update a tuple when concurrent updates of
* the target tuple are not expected (for example, because we have a lock
@@ -284,20 +284,20 @@ simple_table_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
* via ereport().
*/
void
-simple_table_update(Relation rel, ItemPointer otid,
- TupleTableSlot *slot,
- Snapshot snapshot,
- bool *update_indexes)
+simple_table_tuple_update(Relation rel, ItemPointer otid,
+ TupleTableSlot *slot,
+ Snapshot snapshot,
+ bool *update_indexes)
{
TM_Result result;
TM_FailureData tmfd;
LockTupleMode lockmode;
- result = table_update(rel, otid, slot,
- GetCurrentCommandId(true),
- snapshot, InvalidSnapshot,
- true /* wait for commit */ ,
- &tmfd, &lockmode, update_indexes);
+ result = table_tuple_update(rel, otid, slot,
+ GetCurrentCommandId(true),
+ snapshot, InvalidSnapshot,
+ true /* wait for commit */ ,
+ &tmfd, &lockmode, update_indexes);
switch (result)
{
@@ -319,7 +319,7 @@ simple_table_update(Relation rel, ItemPointer otid,
break;
default:
- elog(ERROR, "unrecognized table_update status: %u", result);
+ elog(ERROR, "unrecognized table_tuple_update status: %u", result);
break;
}
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index c39218f8dbb..a2349570df9 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -90,7 +90,7 @@ typedef enum EolType
*/
typedef enum CopyInsertMethod
{
- CIM_SINGLE, /* use table_insert or fdw routine */
+ CIM_SINGLE, /* use table_tuple_insert or fdw routine */
CIM_MULTI, /* always use table_multi_insert */
CIM_MULTI_CONDITIONAL /* use table_multi_insert only if valid */
} CopyInsertMethod;
@@ -2664,7 +2664,7 @@ CopyFrom(CopyState cstate)
PartitionTupleRouting *proute = NULL;
ErrorContextCallback errcallback;
CommandId mycid = GetCurrentCommandId(true);
- int ti_options = 0; /* start with default table_insert options */
+ int ti_options = 0; /* start with default table_tuple_insert options */
BulkInsertState bistate = NULL;
CopyInsertMethod insertMethod;
CopyMultiInsertInfo multiInsertInfo = {0}; /* pacify compiler */
@@ -2737,7 +2737,7 @@ CopyFrom(CopyState cstate)
* FSM for free space is a waste of time, even if we must use WAL because
* of archiving. This could possibly be wrong, but it's unlikely.
*
- * The comments for table_insert and RelationGetBufferForTuple specify that
+ * The comments for table_tuple_insert and RelationGetBufferForTuple specify that
* skipping WAL logging is only safe if we ensure that our tuples do not
* go into pages containing tuples from any other transactions --- but this
* must be the case if we have a new table or new relfilenode, so we need
@@ -2888,7 +2888,7 @@ CopyFrom(CopyState cstate)
/*
* It's generally more efficient to prepare a bunch of tuples for
* insertion, and insert them in one table_multi_insert() call, than call
- * table_insert() separately for every tuple. However, there are a number
+ * table_tuple_insert() separately for every tuple. However, there are a number
* of reasons why we might not be able to do this. These are explained
* below.
*/
@@ -3291,8 +3291,8 @@ CopyFrom(CopyState cstate)
else
{
/* OK, store the tuple and create index entries for it */
- table_insert(resultRelInfo->ri_RelationDesc, myslot,
- mycid, ti_options, bistate);
+ table_tuple_insert(resultRelInfo->ri_RelationDesc, myslot,
+ mycid, ti_options, bistate);
if (resultRelInfo->ri_NumIndices > 0)
recheckIndexes = ExecInsertIndexTuples(myslot,
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 43c2fa91242..4c1d909d380 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -60,7 +60,7 @@ typedef struct
Relation rel; /* relation to write to */
ObjectAddress reladdr; /* address of rel, for ExecCreateTableAs */
CommandId output_cid; /* cmin to insert in output tuples */
- int ti_options; /* table_insert performance options */
+ int ti_options; /* table_tuple_insert performance options */
BulkInsertState bistate; /* bulk insert state */
} DR_intorel;
@@ -576,18 +576,18 @@ intorel_receive(TupleTableSlot *slot, DestReceiver *self)
/*
* Note that the input slot might not be of the type of the target
- * relation. That's supported by table_insert(), but slightly less
+ * relation. That's supported by table_tuple_insert(), but slightly less
* efficient than inserting with the right slot - but the alternative
* would be to copy into a slot of the right type, which would not be
* cheap either. This also doesn't allow accessing per-AM data (say a
* tuple's xmin), but since we don't do that here...
*/
- table_insert(myState->rel,
- slot,
- myState->output_cid,
- myState->ti_options,
- myState->bistate);
+ table_tuple_insert(myState->rel,
+ slot,
+ myState->output_cid,
+ myState->ti_options,
+ myState->bistate);
/* We know this is a newly created relation, so there are no indexes */
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index 99bf3c29f27..5d9dfbc82e4 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -54,7 +54,7 @@ typedef struct
/* These fields are filled by transientrel_startup: */
Relation transientrel; /* relation to write to */
CommandId output_cid; /* cmin to insert in output tuples */
- int ti_options; /* table_insert performance options */
+ int ti_options; /* table_tuple_insert performance options */
BulkInsertState bistate; /* bulk insert state */
} DR_transientrel;
@@ -481,18 +481,18 @@ transientrel_receive(TupleTableSlot *slot, DestReceiver *self)
/*
* Note that the input slot might not be of the type of the target
- * relation. That's supported by table_insert(), but slightly less
+ * relation. That's supported by table_tuple_insert(), but slightly less
* efficient than inserting with the right slot - but the alternative
* would be to copy into a slot of the right type, which would not be
* cheap either. This also doesn't allow accessing per-AM data (say a
* tuple's xmin), but since we don't do that here...
*/
- table_insert(myState->transientrel,
- slot,
- myState->output_cid,
- myState->ti_options,
- myState->bistate);
+ table_tuple_insert(myState->transientrel,
+ slot,
+ myState->output_cid,
+ myState->ti_options,
+ myState->bistate);
/* We know this is a newly created relation, so there are no indexes */
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index baeb13e6a0c..c002e44aa29 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -4730,7 +4730,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
newrel = NULL;
/*
- * Prepare a BulkInsertState and options for table_insert. Because we're
+ * Prepare a BulkInsertState and options for table_tuple_insert. Because we're
* building a new heap, we can skip WAL-logging and fsync it to disk at
* the end instead (unless WAL-logging is required for archiving or
* streaming replication). The FSM is empty too, so don't bother using it.
@@ -5003,7 +5003,7 @@ ATRewriteTable(AlteredTableInfo *tab, Oid OIDNewHeap, LOCKMODE lockmode)
/* Write the tuple out to the new relation */
if (newrel)
- table_insert(newrel, insertslot, mycid, ti_options, bistate);
+ table_tuple_insert(newrel, insertslot, mycid, ti_options, bistate);
ResetExprContext(econtext);
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 2beb3781450..a99441f0b93 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -3332,7 +3332,7 @@ GetTupleForTrigger(EState *estate,
*/
if (!IsolationUsesXactSnapshot())
lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
- test = table_lock_tuple(relation, tid, estate->es_snapshot, oldslot,
+ test = table_tuple_lock(relation, tid, estate->es_snapshot, oldslot,
estate->es_output_cid,
lockmode, LockWaitBlock,
lockflags,
@@ -3386,7 +3386,7 @@ GetTupleForTrigger(EState *estate,
ereport(ERROR,
(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
errmsg("could not serialize access due to concurrent update")));
- elog(ERROR, "unexpected table_lock_tuple status: %u", test);
+ elog(ERROR, "unexpected table_tuple_lock status: %u", test);
break;
case TM_Deleted:
@@ -3402,7 +3402,7 @@ GetTupleForTrigger(EState *estate,
break;
default:
- elog(ERROR, "unrecognized table_lock_tuple status: %u", test);
+ elog(ERROR, "unrecognized table_tuple_lock status: %u", test);
return false; /* keep compiler quiet */
}
}
@@ -3412,7 +3412,7 @@ GetTupleForTrigger(EState *estate,
* We expect the tuple to be present, thus very simple error handling
* suffices.
*/
- if (!table_fetch_row_version(relation, tid, SnapshotAny, oldslot))
+ if (!table_tuple_fetch_row_version(relation, tid, SnapshotAny, oldslot))
elog(ERROR, "failed to fetch tuple for trigger");
}
@@ -4270,7 +4270,7 @@ AfterTriggerExecute(EState *estate,
{
LocTriggerData.tg_trigslot = ExecGetTriggerOldSlot(estate, relInfo);
- if (!table_fetch_row_version(rel, &(event->ate_ctid1), SnapshotAny, LocTriggerData.tg_trigslot))
+ if (!table_tuple_fetch_row_version(rel, &(event->ate_ctid1), SnapshotAny, LocTriggerData.tg_trigslot))
elog(ERROR, "failed to fetch tuple1 for AFTER trigger");
LocTriggerData.tg_trigtuple =
ExecFetchSlotHeapTuple(LocTriggerData.tg_trigslot, false, &should_free_trig);
@@ -4287,7 +4287,7 @@ AfterTriggerExecute(EState *estate,
{
LocTriggerData.tg_newslot = ExecGetTriggerNewSlot(estate, relInfo);
- if (!table_fetch_row_version(rel, &(event->ate_ctid2), SnapshotAny, LocTriggerData.tg_newslot))
+ if (!table_tuple_fetch_row_version(rel, &(event->ate_ctid2), SnapshotAny, LocTriggerData.tg_newslot))
elog(ERROR, "failed to fetch tuple2 for AFTER trigger");
LocTriggerData.tg_newtuple =
ExecFetchSlotHeapTuple(LocTriggerData.tg_newslot, false, &should_free_new);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index ed7c0606bf1..dd80b1e5ed8 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2436,7 +2436,7 @@ ExecBuildAuxRowMark(ExecRowMark *erm, List *targetlist)
* quals. For that result to be useful, typically the input tuple has to be
* last row version (otherwise the result isn't particularly useful) and
* locked (otherwise the result might be out of date). That's typically
- * achieved by using table_lock_tuple() with the
+ * achieved by using table_tuple_lock() with the
* TUPLE_LOCK_FLAG_FIND_LAST_VERSION flag.
*
* Returns a slot containing the new candidate update/delete tuple, or
@@ -2654,9 +2654,9 @@ EvalPlanQualFetchRowMarks(EPQState *epqstate)
else
{
/* ordinary table, fetch the tuple */
- if (!table_fetch_row_version(erm->relation,
- (ItemPointer) DatumGetPointer(datum),
- SnapshotAny, slot))
+ if (!table_tuple_fetch_row_version(erm->relation,
+ (ItemPointer) DatumGetPointer(datum),
+ SnapshotAny, slot))
elog(ERROR, "failed to fetch tuple for EvalPlanQual recheck");
}
}
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f8f6463358f..a487831b69b 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -173,7 +173,7 @@ retry:
PushActiveSnapshot(GetLatestSnapshot());
- res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+ res = table_tuple_lock(rel, &(outslot->tts_tid), GetLatestSnapshot(),
outslot,
GetCurrentCommandId(false),
lockmode,
@@ -208,7 +208,7 @@ retry:
elog(ERROR, "attempted to lock invisible tuple");
break;
default:
- elog(ERROR, "unexpected table_lock_tuple status: %u", res);
+ elog(ERROR, "unexpected table_tuple_lock status: %u", res);
break;
}
}
@@ -337,7 +337,7 @@ retry:
PushActiveSnapshot(GetLatestSnapshot());
- res = table_lock_tuple(rel, &(outslot->tts_tid), GetLatestSnapshot(),
+ res = table_tuple_lock(rel, &(outslot->tts_tid), GetLatestSnapshot(),
outslot,
GetCurrentCommandId(false),
lockmode,
@@ -372,7 +372,7 @@ retry:
elog(ERROR, "attempted to lock invisible tuple");
break;
default:
- elog(ERROR, "unexpected table_lock_tuple status: %u", res);
+ elog(ERROR, "unexpected table_tuple_lock status: %u", res);
break;
}
}
@@ -425,7 +425,7 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
ExecPartitionCheck(resultRelInfo, slot, estate, true);
/* OK, store the tuple and create index entries for it */
- simple_table_insert(resultRelInfo->ri_RelationDesc, slot);
+ simple_table_tuple_insert(resultRelInfo->ri_RelationDesc, slot);
if (resultRelInfo->ri_NumIndices > 0)
recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
@@ -490,8 +490,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
if (resultRelInfo->ri_PartitionCheck)
ExecPartitionCheck(resultRelInfo, slot, estate, true);
- simple_table_update(rel, tid, slot,estate->es_snapshot,
- &update_indexes);
+ simple_table_tuple_update(rel, tid, slot, estate->es_snapshot,
+ &update_indexes);
if (resultRelInfo->ri_NumIndices > 0 && update_indexes)
recheckIndexes = ExecInsertIndexTuples(slot, estate, false, NULL,
@@ -535,7 +535,7 @@ ExecSimpleRelationDelete(EState *estate, EPQState *epqstate,
if (!skip_tuple)
{
/* OK, delete the tuple */
- simple_table_delete(rel, tid, estate->es_snapshot);
+ simple_table_tuple_delete(rel, tid, estate->es_snapshot);
/* AFTER ROW DELETE Triggers */
ExecARDeleteTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c
index 7674ac893c2..e8d7f24efea 100644
--- a/src/backend/executor/nodeLockRows.c
+++ b/src/backend/executor/nodeLockRows.c
@@ -185,7 +185,7 @@ lnext:
if (!IsolationUsesXactSnapshot())
lockflags |= TUPLE_LOCK_FLAG_FIND_LAST_VERSION;
- test = table_lock_tuple(erm->relation, &tid, estate->es_snapshot,
+ test = table_tuple_lock(erm->relation, &tid, estate->es_snapshot,
markSlot, estate->es_output_cid,
lockmode, erm->waitPolicy,
lockflags,
@@ -208,7 +208,7 @@ lnext:
* to fetch the updated tuple instead, but doing so would
* require changing heap_update and heap_delete to not
* complain about updating "invisible" tuples, which seems
- * pretty scary (table_lock_tuple will not complain, but few
+ * pretty scary (table_tuple_lock will not complain, but few
* callers expect TM_Invisible, and we're not one of them). So
* for now, treat the tuple as deleted and do not process.
*/
@@ -229,7 +229,7 @@ lnext:
ereport(ERROR,
(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
errmsg("could not serialize access due to concurrent update")));
- elog(ERROR, "unexpected table_lock_tuple status: %u",
+ elog(ERROR, "unexpected table_tuple_lock status: %u",
test);
break;
@@ -246,7 +246,7 @@ lnext:
break;
default:
- elog(ERROR, "unrecognized table_lock_tuple status: %u",
+ elog(ERROR, "unrecognized table_tuple_lock status: %u",
test);
}
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 444c0c05746..d159c950d0e 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -235,7 +235,7 @@ ExecCheckTIDVisible(EState *estate,
if (!IsolationUsesXactSnapshot())
return;
- if (!table_fetch_row_version(rel, tid, SnapshotAny, tempSlot))
+ if (!table_tuple_fetch_row_version(rel, tid, SnapshotAny, tempSlot))
elog(ERROR, "failed to fetch conflicting tuple for ON CONFLICT");
ExecCheckTupleVisible(estate, rel, tempSlot);
ExecClearTuple(tempSlot);
@@ -543,11 +543,11 @@ ExecInsert(ModifyTableState *mtstate,
specToken = SpeculativeInsertionLockAcquire(GetCurrentTransactionId());
/* insert the tuple, with the speculative token */
- table_insert_speculative(resultRelationDesc, slot,
- estate->es_output_cid,
- 0,
- NULL,
- specToken);
+ table_tuple_insert_speculative(resultRelationDesc, slot,
+ estate->es_output_cid,
+ 0,
+ NULL,
+ specToken);
/* insert index entries for tuple */
recheckIndexes = ExecInsertIndexTuples(slot, estate, true,
@@ -555,8 +555,8 @@ ExecInsert(ModifyTableState *mtstate,
arbiterIndexes);
/* adjust the tuple's state accordingly */
- table_complete_speculative(resultRelationDesc, slot,
- specToken, specConflict);
+ table_tuple_complete_speculative(resultRelationDesc, slot,
+ specToken, specConflict);
/*
* Wake up anyone waiting for our decision. They will re-check
@@ -583,9 +583,9 @@ ExecInsert(ModifyTableState *mtstate,
else
{
/* insert the tuple normally */
- table_insert(resultRelationDesc, slot,
- estate->es_output_cid,
- 0, NULL);
+ table_tuple_insert(resultRelationDesc, slot,
+ estate->es_output_cid,
+ 0, NULL);
/* insert index entries for tuple */
if (resultRelInfo->ri_NumIndices > 0)
@@ -765,13 +765,13 @@ ExecDelete(ModifyTableState *mtstate,
* mode transactions.
*/
ldelete:;
- result = table_delete(resultRelationDesc, tupleid,
- estate->es_output_cid,
- estate->es_snapshot,
- estate->es_crosscheck_snapshot,
- true /* wait for commit */ ,
- &tmfd,
- changingPart);
+ result = table_tuple_delete(resultRelationDesc, tupleid,
+ estate->es_output_cid,
+ estate->es_snapshot,
+ estate->es_crosscheck_snapshot,
+ true /* wait for commit */ ,
+ &tmfd,
+ changingPart);
switch (result)
{
@@ -831,7 +831,7 @@ ldelete:;
inputslot = EvalPlanQualSlot(epqstate, resultRelationDesc,
resultRelInfo->ri_RangeTableIndex);
- result = table_lock_tuple(resultRelationDesc, tupleid,
+ result = table_tuple_lock(resultRelationDesc, tupleid,
estate->es_snapshot,
inputslot, estate->es_output_cid,
LockTupleExclusive, LockWaitBlock,
@@ -873,7 +873,7 @@ ldelete:;
* out.
*
* See also TM_SelfModified response to
- * table_delete() above.
+ * table_tuple_delete() above.
*/
if (tmfd.cmax != estate->es_output_cid)
ereport(ERROR,
@@ -898,7 +898,7 @@ ldelete:;
* locking the latest version via
* TUPLE_LOCK_FLAG_FIND_LAST_VERSION.
*/
- elog(ERROR, "unexpected table_lock_tuple status: %u",
+ elog(ERROR, "unexpected table_tuple_lock status: %u",
result);
return NULL;
}
@@ -916,7 +916,7 @@ ldelete:;
return NULL;
default:
- elog(ERROR, "unrecognized table_delete status: %u", result);
+ elog(ERROR, "unrecognized table_tuple_delete status: %u", result);
return NULL;
}
@@ -988,8 +988,8 @@ ldelete:;
}
else
{
- if (!table_fetch_row_version(resultRelationDesc, tupleid,
- SnapshotAny, slot))
+ if (!table_tuple_fetch_row_version(resultRelationDesc, tupleid,
+ SnapshotAny, slot))
elog(ERROR, "failed to fetch deleted tuple for DELETE RETURNING");
}
}
@@ -1132,7 +1132,7 @@ ExecUpdate(ModifyTableState *mtstate,
* If we generate a new candidate tuple after EvalPlanQual testing, we
* must loop back here and recheck any RLS policies and constraints.
* (We don't need to redo triggers, however. If there are any BEFORE
- * triggers then trigger.c will have done table_lock_tuple to lock the
+ * triggers then trigger.c will have done table_tuple_lock to lock the
* correct tuple, so there's no need to do them again.)
*/
lreplace:;
@@ -1307,12 +1307,12 @@ lreplace:;
* needed for referential integrity updates in transaction-snapshot
* mode transactions.
*/
- result = table_update(resultRelationDesc, tupleid, slot,
- estate->es_output_cid,
- estate->es_snapshot,
- estate->es_crosscheck_snapshot,
- true /* wait for commit */ ,
- &tmfd, &lockmode, &update_indexes);
+ result = table_tuple_update(resultRelationDesc, tupleid, slot,
+ estate->es_output_cid,
+ estate->es_snapshot,
+ estate->es_crosscheck_snapshot,
+ true /* wait for commit */ ,
+ &tmfd, &lockmode, &update_indexes);
switch (result)
{
@@ -1371,7 +1371,7 @@ lreplace:;
inputslot = EvalPlanQualSlot(epqstate, resultRelationDesc,
resultRelInfo->ri_RangeTableIndex);
- result = table_lock_tuple(resultRelationDesc, tupleid,
+ result = table_tuple_lock(resultRelationDesc, tupleid,
estate->es_snapshot,
inputslot, estate->es_output_cid,
lockmode, LockWaitBlock,
@@ -1409,7 +1409,7 @@ lreplace:;
* otherwise error out.
*
* See also TM_SelfModified response to
- * table_update() above.
+ * table_tuple_update() above.
*/
if (tmfd.cmax != estate->es_output_cid)
ereport(ERROR,
@@ -1419,8 +1419,8 @@ lreplace:;
return NULL;
default:
- /* see table_lock_tuple call in ExecDelete() */
- elog(ERROR, "unexpected table_lock_tuple status: %u",
+ /* see table_tuple_lock call in ExecDelete() */
+ elog(ERROR, "unexpected table_tuple_lock status: %u",
result);
return NULL;
}
@@ -1437,7 +1437,7 @@ lreplace:;
return NULL;
default:
- elog(ERROR, "unrecognized table_update status: %u", result);
+ elog(ERROR, "unrecognized table_tuple_update status: %u", result);
return NULL;
}
@@ -1518,7 +1518,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
* previous conclusion that the tuple is conclusively committed is not
* true anymore.
*/
- test = table_lock_tuple(relation, conflictTid,
+ test = table_tuple_lock(relation, conflictTid,
estate->es_snapshot,
existing, estate->es_output_cid,
lockmode, LockWaitBlock, 0,
@@ -1609,7 +1609,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
return false;
default:
- elog(ERROR, "unrecognized table_lock_tuple status: %u", test);
+ elog(ERROR, "unrecognized table_tuple_lock status: %u", test);
}
/* Success, the tuple is locked. */
@@ -1674,7 +1674,7 @@ ExecOnConflictUpdate(ModifyTableState *mtstate,
/*
* Note that it is possible that the target tuple has been modified in
- * this session, after the above table_lock_tuple. We choose to not error
+ * this session, after the above table_tuple_lock. We choose to not error
* out in that case, in line with ExecUpdate's treatment of similar cases.
* This can happen if an UPDATE is triggered from within ExecQual(),
* ExecWithCheckOptions() or ExecProject() above, e.g. by selecting from a
diff --git a/src/backend/executor/nodeTidscan.c b/src/backend/executor/nodeTidscan.c
index 156be56a57d..65d863b6a56 100644
--- a/src/backend/executor/nodeTidscan.c
+++ b/src/backend/executor/nodeTidscan.c
@@ -365,9 +365,9 @@ TidNext(TidScanState *node)
* current according to our snapshot.
*/
if (node->tss_isCurrentOf)
- table_get_latest_tid(heapRelation, snapshot, &tid);
+ table_tuple_get_latest_tid(heapRelation, snapshot, &tid);
- if (table_fetch_row_version(heapRelation, &tid, snapshot, slot))
+ if (table_tuple_fetch_row_version(heapRelation, &tid, snapshot, slot))
return slot;
/* Bad TID or failed snapshot qual; try next */
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 6ab26d8ea8b..036cd8829dd 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -380,7 +380,7 @@ currtid_byreloid(PG_FUNCTION_ARGS)
ItemPointerCopy(tid, result);
snapshot = RegisterSnapshot(GetLatestSnapshot());
- table_get_latest_tid(rel, snapshot, result);
+ table_tuple_get_latest_tid(rel, snapshot, result);
UnregisterSnapshot(snapshot);
table_close(rel, AccessShareLock);
@@ -415,7 +415,7 @@ currtid_byrelname(PG_FUNCTION_ARGS)
ItemPointerCopy(tid, result);
snapshot = RegisterSnapshot(GetLatestSnapshot());
- table_get_latest_tid(rel, snapshot, result);
+ table_tuple_get_latest_tid(rel, snapshot, result);
UnregisterSnapshot(snapshot);
table_close(rel, AccessShareLock);
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index ebfa0d51855..39910ee0c8c 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -80,7 +80,7 @@ typedef enum TM_Result
/*
- * When table_update, table_delete, or table_lock_tuple fail because the target
+ * When table_tuple_update, table_tuple_delete, or table_tuple_lock fail because the target
* tuple is already outdated, they fill in this struct to provide information
* to the caller about what happened.
*
@@ -106,13 +106,13 @@ typedef struct TM_FailureData
bool traversed;
} TM_FailureData;
-/* "options" flag bits for table_insert */
+/* "options" flag bits for table_tuple_insert */
#define TABLE_INSERT_SKIP_WAL 0x0001
#define TABLE_INSERT_SKIP_FSM 0x0002
#define TABLE_INSERT_FROZEN 0x0004
#define TABLE_INSERT_NO_LOGICAL 0x0008
-/* flag bits for table_lock_tuple */
+/* flag bits for table_tuple_lock */
/* Follow tuples whose update is in progress if lock modes don't conflict */
#define TUPLE_LOCK_FLAG_LOCK_UPDATE_IN_PROGRESS (1 << 0)
/* Follow update chain and lock latest version of tuple */
@@ -335,12 +335,12 @@ typedef struct TableAmRoutine
* ------------------------------------------------------------------------
*/
- /* see table_insert() for reference about parameters */
+ /* see table_tuple_insert() for reference about parameters */
void (*tuple_insert) (Relation rel, TupleTableSlot *slot,
CommandId cid, int options,
struct BulkInsertStateData *bistate);
- /* see table_insert_speculative() for reference about parameters */
+ /* see table_tuple_insert_speculative() for reference about parameters */
void (*tuple_insert_speculative) (Relation rel,
TupleTableSlot *slot,
CommandId cid,
@@ -348,7 +348,7 @@ typedef struct TableAmRoutine
struct BulkInsertStateData *bistate,
uint32 specToken);
- /* see table_complete_speculative() for reference about parameters */
+ /* see table_tuple_complete_speculative() for reference about parameters */
void (*tuple_complete_speculative) (Relation rel,
TupleTableSlot *slot,
uint32 specToken,
@@ -358,7 +358,7 @@ typedef struct TableAmRoutine
void (*multi_insert) (Relation rel, TupleTableSlot **slots, int nslots,
CommandId cid, int options, struct BulkInsertStateData *bistate);
- /* see table_delete() for reference about parameters */
+ /* see table_tuple_delete() for reference about parameters */
TM_Result (*tuple_delete) (Relation rel,
ItemPointer tid,
CommandId cid,
@@ -368,7 +368,7 @@ typedef struct TableAmRoutine
TM_FailureData *tmfd,
bool changingPart);
- /* see table_update() for reference about parameters */
+ /* see table_tuple_update() for reference about parameters */
TM_Result (*tuple_update) (Relation rel,
ItemPointer otid,
TupleTableSlot *slot,
@@ -380,7 +380,7 @@ typedef struct TableAmRoutine
LockTupleMode *lockmode,
bool *update_indexes);
- /* see table_lock_tuple() for reference about parameters */
+ /* see table_tuple_lock() for reference about parameters */
TM_Result (*tuple_lock) (Relation rel,
ItemPointer tid,
Snapshot snapshot,
@@ -914,7 +914,7 @@ table_index_fetch_end(struct IndexFetchTableData *scan)
* supports storing multiple row versions reachable via a single index entry
* (like heap's HOT). Whereas table_fetch_row_version only evaluates the
* tuple exactly at `tid`. Outside of index entry ->table tuple lookups,
- * table_fetch_row_version is what's usually needed.
+ * table_tuple_fetch_row_version is what's usually needed.
*/
static inline bool
table_index_fetch_tuple(struct IndexFetchTableData *scan,
@@ -957,10 +957,10 @@ extern bool table_index_fetch_tuple_check(Relation rel,
* index entry->table tuple lookups.
*/
static inline bool
-table_fetch_row_version(Relation rel,
- ItemPointer tid,
- Snapshot snapshot,
- TupleTableSlot *slot)
+table_tuple_fetch_row_version(Relation rel,
+ ItemPointer tid,
+ Snapshot snapshot,
+ TupleTableSlot *slot)
{
return rel->rd_tableam->tuple_fetch_row_version(rel, tid, snapshot, slot);
}
@@ -970,7 +970,7 @@ table_fetch_row_version(Relation rel,
* point at the newest version.
*/
static inline void
-table_get_latest_tid(Relation rel, Snapshot snapshot, ItemPointer tid)
+table_tuple_get_latest_tid(Relation rel, Snapshot snapshot, ItemPointer tid)
{
rel->rd_tableam->tuple_get_latest_tid(rel, snapshot, tid);
}
@@ -1050,8 +1050,8 @@ table_compute_xid_horizon_for_tuples(Relation rel,
* reflected in the slots contents.
*/
static inline void
-table_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
- int options, struct BulkInsertStateData *bistate)
+table_tuple_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
+ int options, struct BulkInsertStateData *bistate)
{
rel->rd_tableam->tuple_insert(rel, slot, cid, options,
bistate);
@@ -1066,12 +1066,12 @@ table_insert(Relation rel, TupleTableSlot *slot, CommandId cid,
*
* A transaction having performed a speculative insertion has to either abort,
* or finish the speculative insertion with
- * table_complete_speculative(succeeded = ...).
+ * table_tuple_complete_speculative(succeeded = ...).
*/
static inline void
-table_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
- int options, struct BulkInsertStateData *bistate,
- uint32 specToken)
+table_tuple_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
+ int options, struct BulkInsertStateData *bistate,
+ uint32 specToken)
{
rel->rd_tableam->tuple_insert_speculative(rel, slot, cid, options,
bistate, specToken);
@@ -1082,8 +1082,8 @@ table_insert_speculative(Relation rel, TupleTableSlot *slot, CommandId cid,
* succeeded is true, the tuple is fully inserted, if false, it's removed.
*/
static inline void
-table_complete_speculative(Relation rel, TupleTableSlot *slot,
- uint32 specToken, bool succeeded)
+table_tuple_complete_speculative(Relation rel, TupleTableSlot *slot,
+ uint32 specToken, bool succeeded)
{
rel->rd_tableam->tuple_complete_speculative(rel, slot, specToken,
succeeded);
@@ -1098,7 +1098,7 @@ table_complete_speculative(Relation rel, TupleTableSlot *slot,
*
* Except for taking `nslots` tuples as input, as an array of TupleTableSlots
* in `slots`, the parameters for table_multi_insert() are the same as for
- * table_insert().
+ * table_tuple_insert().
*
* Note: this leaks memory into the current memory context. You can create a
* temporary context before calling this, if that's a problem.
@@ -1115,7 +1115,7 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,
* Delete a tuple.
*
* NB: do not call this directly unless prepared to deal with
- * concurrent-update conditions. Use simple_table_delete instead.
+ * concurrent-update conditions. Use simple_table_tuple_delete instead.
*
* Input parameters:
* relation - table to be modified (caller must hold suitable lock)
@@ -1138,9 +1138,9 @@ table_multi_insert(Relation rel, TupleTableSlot **slots, int nslots,
* struct TM_FailureData for additional info.
*/
static inline TM_Result
-table_delete(Relation rel, ItemPointer tid, CommandId cid,
- Snapshot snapshot, Snapshot crosscheck, bool wait,
- TM_FailureData *tmfd, bool changingPart)
+table_tuple_delete(Relation rel, ItemPointer tid, CommandId cid,
+ Snapshot snapshot, Snapshot crosscheck, bool wait,
+ TM_FailureData *tmfd, bool changingPart)
{
return rel->rd_tableam->tuple_delete(rel, tid, cid,
snapshot, crosscheck,
@@ -1151,7 +1151,7 @@ table_delete(Relation rel, ItemPointer tid, CommandId cid,
* Update a tuple.
*
* NB: do not call this directly unless you are prepared to deal with
- * concurrent-update conditions. Use simple_table_update instead.
+ * concurrent-update conditions. Use simple_table_tuple_update instead.
*
* Input parameters:
* relation - table to be modified (caller must hold suitable lock)
@@ -1182,10 +1182,10 @@ table_delete(Relation rel, ItemPointer tid, CommandId cid,
* for additional info.
*/
static inline TM_Result
-table_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
- CommandId cid, Snapshot snapshot, Snapshot crosscheck, bool wait,
- TM_FailureData *tmfd, LockTupleMode *lockmode,
- bool *update_indexes)
+table_tuple_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
+ CommandId cid, Snapshot snapshot, Snapshot crosscheck, bool wait,
+ TM_FailureData *tmfd, LockTupleMode *lockmode,
+ bool *update_indexes)
{
return rel->rd_tableam->tuple_update(rel, otid, slot,
cid, snapshot, crosscheck,
@@ -1227,7 +1227,7 @@ table_update(Relation rel, ItemPointer otid, TupleTableSlot *slot,
* comments for struct TM_FailureData for additional info.
*/
static inline TM_Result
-table_lock_tuple(Relation rel, ItemPointer tid, Snapshot snapshot,
+table_tuple_lock(Relation rel, ItemPointer tid, Snapshot snapshot,
TupleTableSlot *slot, CommandId cid, LockTupleMode mode,
LockWaitPolicy wait_policy, uint8 flags,
TM_FailureData *tmfd)
@@ -1601,12 +1601,12 @@ table_scan_sample_next_tuple(TableScanDesc scan,
* ----------------------------------------------------------------------------
*/
-extern void simple_table_insert(Relation rel, TupleTableSlot *slot);
-extern void simple_table_delete(Relation rel, ItemPointer tid,
- Snapshot snapshot);
-extern void simple_table_update(Relation rel, ItemPointer otid,
- TupleTableSlot *slot, Snapshot snapshot,
- bool *update_indexes);
+extern void simple_table_tuple_insert(Relation rel, TupleTableSlot *slot);
+extern void simple_table_tuple_delete(Relation rel, ItemPointer tid,
+ Snapshot snapshot);
+extern void simple_table_tuple_update(Relation rel, ItemPointer otid,
+ TupleTableSlot *slot, Snapshot snapshot,
+ bool *update_indexes);
/* ----------------------------------------------------------------------------
--
2.19.1
On Wed, May 8, 2019 at 5:05 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:
Not having consistency is the main aspect I wish to bring to
attention. For some callback functions the comment is only

    /* see table_insert() for reference about parameters */
    void        (*tuple_insert) (Relation rel, TupleTableSlot *slot,
                                 CommandId cid, int options,
                                 struct BulkInsertStateData *bistate);

    /* see table_insert_speculative() for reference about parameters */
    void        (*tuple_insert_speculative) (Relation rel,
                                             TupleTableSlot *slot,
                                             CommandId cid,
                                             int options,
                                             struct BulkInsertStateData *bistate,
                                             uint32 specToken);

whereas for some other callbacks the parameter explanation exists in
both places. It seems we should be consistent.
I feel that in the long run it becomes a pain to keep them in sync as
the comments evolve. For example,

    /*
     * Estimate the size of shared memory needed for a parallel scan of this
     * relation. The snapshot does not need to be accounted for.
     */
    Size        (*parallelscan_estimate) (Relation rel);

parallelscan_estimate does not take a snapshot argument, but
table_parallelscan_estimate does. So the chances are high that the two
sets of comments drift apart, with a change made in one place but not
the other. I agree, though, that the audience is different for the
two. So the "see XXX for reference about parameters" approach seems
fine for all the callbacks, and only the things the AM layer
specifically needs to be aware of should live above the callback
function.
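For what it's worth, the snapshot split becomes obvious once you look
at the wrapper itself: the snapshot space is accounted for in the
wrapper before delegating to the AM. The sketch below is paraphrased
from memory rather than copied verbatim from tableam.c, so treat it as
an illustration only.

    Size
    table_parallelscan_estimate(Relation rel, Snapshot snapshot)
    {
        Size        sz = 0;

        /* the wrapper, not the AM, reserves space for the snapshot */
        if (IsMVCCSnapshot(snapshot))
            sz = add_size(sz, EstimateSnapshotSpace(snapshot));
        else
            Assert(snapshot == SnapshotAny);

        /* the AM callback only needs to size its own scan state */
        sz = add_size(sz, rel->rd_tableam->parallelscan_estimate(rel));

        return sz;
    }

That is exactly the kind of detail that matters to callers of the
wrapper but not to an AM implementing the callback, which argues for
keeping the caller-facing comment above the wrapper and only a short
"see table_parallelscan_estimate()" note above the callback.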
The topic of comment-style consistency for all the wrappers seems to
have gotten lost in the other discussion; I would like to seek
opinions on it.
On 2019-May-14, Ashwin Agrawal wrote:
Thank you. Please find the patch to rename the agreed functions. It would
be good to make them all consistent instead of making an exception for three
functions, but there seems to be no consensus on that. Given the table_ APIs
are new, we could modify them while leaving the systable_ ones as is, but
given the objections I left that as is.
Hmm .. I'm surprised to find out that we only have one caller of
simple_table_insert, simple_table_delete, simple_table_update. I'm not
sure I agree to the new names those got in the renaming patch, since
they're not really part of table AM proper ... do we really want to
offer those as separate entry points? Why not just remove those routines?
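For reference, the insert variant at least is more or less a one-liner
around table_tuple_insert -- roughly the following sketch, written from
memory rather than copied from tableam.c:

    void
    simple_table_tuple_insert(Relation rel, TupleTableSlot *slot)
    {
        /* no concurrency failures are possible for a plain insert */
        table_tuple_insert(rel, slot, GetCurrentCommandId(true), 0, NULL);
    }

so for the insert case there would be very little for callers to
duplicate if it went away.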
Somewhat related: it's strange that CatalogTupleUpdate etc use
simple_heap_update instead of the tableam variant wrappers (I suppose
that's either because of bootstrapping concerns, or performance). Would
it be too strange to have CatalogTupleInsert call heap_insert()
directly, and do away with simple_heap_insert? (Equivalently for
update, delete). I think those wrappers made perfect sense when we had
simple_heap_insert all around the place ... but now that we introduced
the CatalogTupleFoo wrappers, I don't think it does any longer.
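To spell out what I mean: today CatalogTupleInsert is roughly the
following (a from-memory sketch of indexing.c, not a verbatim quote),
and the simple_heap_insert call in the middle is the only piece that
would change:

    void
    CatalogTupleInsert(Relation heapRel, HeapTuple tup)
    {
        CatalogIndexState indstate;

        indstate = CatalogOpenIndexes(heapRel);

        /* this is the call that could become a direct heap_insert() */
        simple_heap_insert(heapRel, tup);

        CatalogIndexInsert(indstate, tup);
        CatalogCloseIndexes(indstate);
    }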
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 2019-05-14 16:27:47 -0400, Alvaro Herrera wrote:
On 2019-May-14, Ashwin Agrawal wrote:
Thank you. Please find the patch to rename the agreed functions. It would
be good to make them all consistent instead of making an exception for three
functions, but there seems to be no consensus on that. Given the table_ APIs
are new, we could modify them while leaving the systable_ ones as is, but
given the objections I left that as is.

Hmm .. I'm surprised to find out that we only have one caller of
simple_table_insert, simple_table_delete, simple_table_update. I'm not
sure I agree to the new names those got in the renaming patch, since
they're not really part of table AM proper ... do we really want to
offer those as separate entry points? Why not just remove those routines?
I don't think it'd be better if execReplication.c has them inline - we'd
just have the exact same code inline. There are plenty of extensions out
there that use simple_heap_*, and without such wrappers, they'll all have to
copy the contents of simple_table_* too. Also we'll probably want to
switch CatalogTuple* over to them at some point.
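To be concrete about "the exact same code": the delete variant is
essentially the following (abridged and from memory -- check tableam.c
for the authoritative version), and it's the TM_Result handling at the
bottom that every caller would otherwise have to repeat:

    void
    simple_table_tuple_delete(Relation rel, ItemPointer tid, Snapshot snapshot)
    {
        TM_Result   result;
        TM_FailureData tmfd;

        result = table_tuple_delete(rel, tid,
                                    GetCurrentCommandId(true),
                                    snapshot, InvalidSnapshot,
                                    true,   /* wait for commit */
                                    &tmfd, false /* changingPart */ );

        /* callers of the "simple" variant don't expect concurrent changes */
        switch (result)
        {
            case TM_Ok:
                break;
            case TM_SelfModified:
                elog(ERROR, "tuple already updated by self");
                break;
            case TM_Updated:
                elog(ERROR, "tuple concurrently updated");
                break;
            case TM_Deleted:
                elog(ERROR, "tuple concurrently deleted");
                break;
            default:
                elog(ERROR, "unrecognized table_tuple_delete status: %u",
                     result);
                break;
        }
    }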
Somewhat related: it's strange that CatalogTupleUpdate etc use
simple_heap_update instead of the tableam variant wrappers (I suppose
that's either because of bootstrapping concerns, or performance).
It's because the callers currently expect to work with heap tuples,
rather than slots. And changing that would have been a *LOT* of work (as
in: prohibitively much for v12). I didn't want to create a slot for
each insertion, because that'd make them slower. But as Robert said on
IM (discussing something else), we already create a slot in most cases,
via CatalogIndexInsert(). Not sure if HOT updates and deletes are
common enough to make the slot creation in those cases measurable.
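For illustration, a slot-based catalog insert would have to do
something like the hypothetical sketch below (the function name and
structure are made up for this example, not proposed code); the
MakeSingleTupleTableSlot / ExecDropSingleTupleTableSlot pair on every
call is the extra cost being weighed against reusing the slot that
CatalogIndexInsert builds anyway:

    static void
    catalog_tuple_insert_with_slot(Relation heapRel, HeapTuple tup)
    {
        TupleTableSlot *slot;
        CatalogIndexState indstate;

        /* wrap the caller-supplied HeapTuple in a heap-tuple slot */
        slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel),
                                        &TTSOpsHeapTuple);
        ExecStoreHeapTuple(tup, slot, false);

        indstate = CatalogOpenIndexes(heapRel);
        simple_table_tuple_insert(heapRel, slot);
        CatalogIndexInsert(indstate, tup);
        CatalogCloseIndexes(indstate);

        ExecDropSingleTupleTableSlot(slot);
    }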
Would it be too strange to have CatalogTupleInsert call heap_insert()
directly, and do away with simple_heap_insert? (Equivalently for
update, delete). I think those wrappers made perfect sense when we had
simple_heap_insert all around the place ... but now that we introduced
the CatalogTupleFoo wrappers, I don't think it does any longer.
I don't really see the advantage. Won't that just break a lot of code
that will continue to work otherwise, as long as you just use heap
tables? With the sole benefit of moving a bit of code from one place to
another?
Greetings,
Andres Freund
Hi,
On 2019-05-14 12:11:46 -0700, Ashwin Agrawal wrote:
Thank you. Please find the patch to rename the agreed functions. It would
be good to make them all consistent instead of making an exception for three
functions, but there seems to be no consensus on that. Given the table_ APIs
are new, we could modify them while leaving the systable_ ones as is, but
given the objections I left that as is.
I've pushed a slightly modified version (rebase, some additional
newlines due to the longer function names) now. Thanks for the patch!
Greetings,
Andres Freund
On Thu, May 23, 2019 at 4:32 PM Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2019-05-14 12:11:46 -0700, Ashwin Agrawal wrote:
Thank you. Please find the patch to rename the agreed functions. It would
be good to make them all consistent instead of making an exception for three
functions, but there seems to be no consensus on that. Given the table_ APIs
are new, we could modify them while leaving the systable_ ones as is, but
given the objections I left that as is.
I've pushed a slightly modified version (rebase, some additional
newlines due to the longer function names) now. Thanks for the patch!
Thanks a lot, Andres. With pgindent having been run on master before this
patch, I can imagine that possibly generated additional work for you.