pgindent (was Re: Header comments in the recently added files)

Started by Robert Haas almost 15 years ago. 11 messages.
#1 Robert Haas
robertmhaas@gmail.com

On Thu, Mar 10, 2011 at 10:21 AM, Bruce Momjian <bruce@momjian.us> wrote:

Tom Lane wrote:

Itagaki Takahiro <itagaki.takahiro@gmail.com> writes:

I found trivial mistakes in the recently added files.
Will they be fixed by some automated batch, or manually?

- "Copyright (c) xxx-*2010*, PostgreSQL Global Development Group"
  in pg_collation.h, pg_foreign_table.h, basebackup.h, syncrep.h,
  pg_backup_directory.c and auth_delay.c.
- "IDENTIFICATION $PostgreSQL$" in pg_collation.h, syncrep.h, and syncrep.c
  Other files have their actual paths in the same place.

It might be worth Bruce making another run of his copyright-update
script to fix the former problems.  As for the latter problems,

I ran it just now and nothing was changed, so we are OK now.

Speaking of running scripts, I think we should run pgindent now.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#2 Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#1)
Re: pgindent (was Re: Header comments in the recently added files)

Robert Haas wrote:

On Thu, Mar 10, 2011 at 10:21 AM, Bruce Momjian <bruce@momjian.us> wrote:

Tom Lane wrote:

Itagaki Takahiro <itagaki.takahiro@gmail.com> writes:

I found trivial mistakes in the recently added files.
Will they be fixed by some automated batch, or manually?

- "Copyright (c) xxx-*2010*, PostgreSQL Global Development Group"
  in pg_collation.h, pg_foreign_table.h, basebackup.h, syncrep.h,
  pg_backup_directory.c and auth_delay.c.
- "IDENTIFICATION $PostgreSQL$" in pg_collation.h, syncrep.h, and syncrep.c
  Other files have their actual paths in the same place.

It might be worth Bruce making another run of his copyright-update
script to fix the former problems.  As for the latter problems,

I ran it just now and nothing was changed, so we are OK now.

Speaking of running scripts, I think we should run pgindent now.

We usually do it during a late beta.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#3 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#2)
Re: pgindent (was Re: Header comments in the recently added files)

Bruce Momjian <bruce@momjian.us> writes:

Robert Haas wrote:

Speaking of running scripts, I think we should run pgindent now.

We usually do it during a late beta.

Last time we did it early and then again late, and that seemed to work
well. I wouldn't object to a pgindent run now, but please sync with me
before you do --- I've got some heavy hacking to do on the collations
patch, and don't want to find myself trying to merge changes after a
pgindent run.

regards, tom lane

#4 Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#3)
Re: pgindent (was Re: Header comments in the recently added files)

Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Robert Haas wrote:

Speaking of running scripts, I think we should run pgindent now.

We usually do it during a late beta.

Last time we did it early and then again late, and that seemed to work
well. I wouldn't object to a pgindent run now, but please sync with me
before you do --- I've got some heavy hacking to do on the collations
patch, and don't want to find myself trying to merge changes after a
pgindent run.

Andrew needs to update the typedef list in our GIT tree first.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#5 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#3)
Re: pgindent (was Re: Header comments in the recently added files)

On Thu, Mar 10, 2011 at 10:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Bruce Momjian <bruce@momjian.us> writes:

Robert Haas wrote:

Speaking of running scripts, I think we should run pgindent now.

We usually do it during a late beta.

Last time we did it early and then again late, and that seemed to work
well.  I wouldn't object to a pgindent run now, but please sync with me
before you do --- I've got some heavy hacking to do on the collations
patch, and don't want to find myself trying to merge changes after a
pgindent run.

Yeah, +1 for doing it as soon as Tom is at a good stopping point. It
makes things a lot simpler later on.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#6 Magnus Hagander
magnus@hagander.net
In reply to: Robert Haas (#5)
Re: pgindent (was Re: Header comments in the recently added files)

On Thu, Mar 10, 2011 at 16:50, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 10, 2011 at 10:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Bruce Momjian <bruce@momjian.us> writes:

Robert Haas wrote:

Speaking of running scripts, I think we should run pgindent now.

We usually do it during a late beta.

Last time we did it early and then again late, and that seemed to work
well.  I wouldn't object to a pgindent run now, but please sync with me
before you do --- I've got some heavy hacking to do on the collations
patch, and don't want to find myself trying to merge changes after a
pgindent run.

Yeah, +1 for doing it as soon as Tom is at a good stopping point.  It
makes things a lot simpler later on.

Agreed.

With git in play, it should be quite possible to merge with head just
before the pgindent run, then run pgindent on your local topic branch,
and then merge with head after the pgindent run - that should take
care of *most* of the conflicts, I think - as long as you use the same
typedef list.
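
A minimal sketch of that sequence, assuming the tree-wide run lands on
master and "my-topic" is a hypothetical local branch name:

  # Just before the tree-wide pgindent run: sync the branch with head.
  git checkout my-topic
  git merge master

  # Run pgindent on the branch's files, using the same typedef list as
  # the tree-wide run, then commit the reindented code.
  git commit -am "Run pgindent on topic branch"

  # After the tree-wide pgindent commit: merge head again; hunks that
  # were reindented identically on both sides now merge cleanly.
  git merge master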

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

#7 Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#5)
Re: pgindent (was Re: Header comments in the recently added files)

Robert Haas <robertmhaas@gmail.com> writes:

Speaking of running scripts, I think we should run pgindent now.

Yeah, +1 for doing it as soon as Tom is at a good stopping point. It
makes things a lot simpler later on.

IIRC the argument for an early pgindent run was to standardize the new
code for easier review. I expect to be spending a whole lot of time
reading collate and SSI code over the next few weeks, so I'm in favor
of pgindent'ing that stuff first. But I guess we need the typedef
list update before anything can happen.

regards, tom lane

#8 Andrew Dunstan
andrew@dunslane.net
In reply to: Robert Haas (#1)
Re: pgindent (was Re: Header comments in the recently added files)

On 03/10/2011 10:33 AM, Robert Haas wrote:

On Thu, Mar 10, 2011 at 10:21 AM, Bruce Momjian <bruce@momjian.us> wrote:

Tom Lane wrote:

Itagaki Takahiro <itagaki.takahiro@gmail.com> writes:

I found trivial mistakes in the recently added files.
Will they be fixed by some automated batch, or manually?
- "Copyright (c) xxx-*2010*, PostgreSQL Global Development Group"
in pg_collation.h, pg_foreign_table.h, basebackup.h, syncrep.h,
pg_backup_directory.c and auth_delay.c.
- "IDENTIFICATION $PostgreSQL$" in pg_collation.h, syncrep.h, and syncrep.c
Other files have their actual paths in the same place.

It might be worth Bruce making another run of his copyright-update
script to fix the former problems. As for the latter problems,

I ran it just now and nothing was changed, so we are OK now.

Speaking of running scripts, I think we should run pgindent now.

Please wait a few days at least. I am just setting up a new server at
this very moment (now that SL6 is released) and will get a new FBSD
buildfarm member running that extracts typedefs.
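
For the curious, typedef extraction amounts to reading typedef entries
out of the debug info of a built binary. A rough sketch of the idea,
assuming a postgres binary compiled with -g (the real extraction script
is more elaborate and portable than this):

  objdump --dwarf=info postgres \
      | grep -A2 DW_TAG_typedef \
      | grep DW_AT_name \
      | sed 's/.*: //' \
      | sort -u > typedefs.list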

cheers

andrew

#9 Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#7)
Re: pgindent (was Re: Header comments in the recently added files)

On Thu, Mar 10, 2011 at 10:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Speaking of running scripts, I think we should run pgindent now.

Yeah, +1 for doing it as soon as Tom is at a good stopping point.  It
makes things a lot simpler later on.

IIRC the argument for an early pgindent run was to standardize the new
code for easier review.  I expect to be spending a whole lot of time
reading collate and SSI code over the next few weeks, so I'm in favor
of pgindent'ing that stuff first.  But I guess we need the typedef
list update before anything can happen.

That's one good reason. Another is that this is presumably the time
of the cycle when there are the fewest outstanding patches, making it
a good time for changes that are likely to conflict with lots of other
things.

At any rate, it sounds like Andrew needs a few days to get the typedef
list together, so let's wait for that to happen and then we'll see
where we are.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#10 Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Tom Lane (#7)
1 attachment(s)
Re: pgindent (was Re: Header comments in the recently added files)

Tom Lane <tgl@sss.pgh.pa.us> wrote:

I expect to be spending a whole lot of time reading collate and
SSI code over the next few weeks, so I'm in favor of pgindent'ing
that stuff first.

I've been running that throughout development, but it hasn't been
run after the last few changes. If you want the SSI files in
pgindent format, you can get there by applying the attached patch.
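
The patch is a context diff with a/ and b/ path prefixes, so from the
top of the source tree something like this should apply it:

  patch -p1 < ssi-pgindent-after-alpha1.patch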

-Kevin

Attachments:

ssi-pgindent-after-alpha1.patch (text/plain)
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***************
*** 746,755 **** OldSerXidAdd(TransactionId xid, SerCommitSeqNo minConflictCommitSeqNo)
  	Assert(TransactionIdIsValid(tailXid));
  
  	/*
! 	 * If the SLRU is currently unused, zero out the whole active region
! 	 * from tailXid to headXid before taking it into use. Otherwise zero
! 	 * out only any new pages that enter the tailXid-headXid range as we
! 	 * advance headXid.
  	 */
  	if (oldSerXidControl->headPage < 0)
  	{
--- 746,755 ----
  	Assert(TransactionIdIsValid(tailXid));
  
  	/*
! 	 * If the SLRU is currently unused, zero out the whole active region from
! 	 * tailXid to headXid before taking it into use. Otherwise zero out only
! 	 * any new pages that enter the tailXid-headXid range as we advance
! 	 * headXid.
  	 */
  	if (oldSerXidControl->headPage < 0)
  	{
***************
*** 855,862 **** OldSerXidSetActiveSerXmin(TransactionId xid)
  	/*
  	 * When no sxacts are active, nothing overlaps, set the xid values to
  	 * invalid to show that there are no valid entries.  Don't clear headPage,
! 	 * though.  A new xmin might still land on that page, and we don't want
! 	 * to repeatedly zero out the same page.
  	 */
  	if (!TransactionIdIsValid(xid))
  	{
--- 855,862 ----
  	/*
  	 * When no sxacts are active, nothing overlaps, set the xid values to
  	 * invalid to show that there are no valid entries.  Don't clear headPage,
! 	 * though.	A new xmin might still land on that page, and we don't want to
! 	 * repeatedly zero out the same page.
  	 */
  	if (!TransactionIdIsValid(xid))
  	{
***************
*** 901,907 **** OldSerXidSetActiveSerXmin(TransactionId xid)
  void
  CheckPointPredicate(void)
  {
! 	int tailPage;
  
  	LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
  
--- 901,907 ----
  void
  CheckPointPredicate(void)
  {
! 	int			tailPage;
  
  	LWLockAcquire(OldSerXidLock, LW_EXCLUSIVE);
  
***************
*** 1317,1328 **** SummarizeOldestCommittedSxact(void)
  	/*
  	 * This function is only called if there are no sxact slots available.
  	 * Some of them must belong to old, already-finished transactions, so
! 	 * there should be something in FinishedSerializableTransactions list
! 	 * that we can summarize. However, there's a race condition: while we
! 	 * were not holding any locks, a transaction might have ended and cleaned
! 	 * up all the finished sxact entries already, freeing up their sxact
! 	 * slots. In that case, we have nothing to do here. The caller will find
! 	 * one of the slots released by the other backend when it retries.
  	 */
  	if (SHMQueueEmpty(FinishedSerializableTransactions))
  	{
--- 1317,1328 ----
  	/*
  	 * This function is only called if there are no sxact slots available.
  	 * Some of them must belong to old, already-finished transactions, so
! 	 * there should be something in FinishedSerializableTransactions list that
! 	 * we can summarize. However, there's a race condition: while we were not
! 	 * holding any locks, a transaction might have ended and cleaned up all
! 	 * the finished sxact entries already, freeing up their sxact slots. In
! 	 * that case, we have nothing to do here. The caller will find one of the
! 	 * slots released by the other backend when it retries.
  	 */
  	if (SHMQueueEmpty(FinishedSerializableTransactions))
  	{
***************
*** 2207,2213 **** PredicateLockTuple(const Relation relation, const HeapTuple tuple)
  	 */
  	if (relation->rd_index == NULL)
  	{
! 		TransactionId	myxid;
  
  		targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
  
--- 2207,2213 ----
  	 */
  	if (relation->rd_index == NULL)
  	{
! 		TransactionId myxid;
  
  		targetxmin = HeapTupleHeaderGetXmin(tuple->t_data);
  
***************
*** 2217,2222 **** PredicateLockTuple(const Relation relation, const HeapTuple tuple)
--- 2217,2223 ----
  			if (TransactionIdFollowsOrEquals(targetxmin, TransactionXmin))
  			{
  				TransactionId xid = SubTransGetTopmostTransaction(targetxmin);
+ 
  				if (TransactionIdEquals(xid, myxid))
  				{
  					/* We wrote it; we already have a write lock. */
***************
*** 2266,2272 **** PredicateLockTupleRowVersionLink(const Relation relation,
  	PREDICATELOCKTARGETTAG oldtupletag;
  	PREDICATELOCKTARGETTAG oldpagetag;
  	PREDICATELOCKTARGETTAG newtupletag;
! 	BlockNumber	oldblk,
  				newblk;
  	OffsetNumber oldoff,
  				newoff;
--- 2267,2273 ----
  	PREDICATELOCKTARGETTAG oldtupletag;
  	PREDICATELOCKTARGETTAG oldpagetag;
  	PREDICATELOCKTARGETTAG newtupletag;
! 	BlockNumber oldblk,
  				newblk;
  	OffsetNumber oldoff,
  				newoff;
***************
*** 2302,2311 **** PredicateLockTupleRowVersionLink(const Relation relation,
  
  	/*
  	 * A page-level lock on the page containing the old tuple counts too.
! 	 * Anyone holding a lock on the page is logically holding a lock on
! 	 * the old tuple, so we need to acquire a lock on his behalf on the
! 	 * new tuple too. However, if the new tuple is on the same page as the
! 	 * old one, the old page-level lock already covers the new tuple.
  	 *
  	 * A relation-level lock always covers both tuple versions, so we don't
  	 * need to worry about those here.
--- 2303,2312 ----
  
  	/*
  	 * A page-level lock on the page containing the old tuple counts too.
! 	 * Anyone holding a lock on the page is logically holding a lock on the
! 	 * old tuple, so we need to acquire a lock on his behalf on the new tuple
! 	 * too. However, if the new tuple is on the same page as the old one, the
! 	 * old page-level lock already covers the new tuple.
  	 *
  	 * A relation-level lock always covers both tuple versions, so we don't
  	 * need to worry about those here.
***************
*** 2662,2671 **** PredicateLockPageSplit(const Relation relation, const BlockNumber oldblkno,
  		/*
  		 * Move the locks to the parent. This shouldn't fail.
  		 *
! 		 * Note that here we are removing locks held by other
! 		 * backends, leading to a possible inconsistency in their
! 		 * local lock hash table. This is OK because we're replacing
! 		 * it with a lock that covers the old one.
  		 */
  		success = TransferPredicateLocksToNewTarget(oldtargettag,
  													newtargettag,
--- 2663,2672 ----
  		/*
  		 * Move the locks to the parent. This shouldn't fail.
  		 *
! 		 * Note that here we are removing locks held by other backends,
! 		 * leading to a possible inconsistency in their local lock hash table.
! 		 * This is OK because we're replacing it with a lock that covers the
! 		 * old one.
  		 */
  		success = TransferPredicateLocksToNewTarget(oldtargettag,
  													newtargettag,
***************
*** 2690,2705 **** PredicateLockPageCombine(const Relation relation, const BlockNumber oldblkno,
  						 const BlockNumber newblkno)
  {
  	/*
! 	 * Page combines differ from page splits in that we ought to be
! 	 * able to remove the locks on the old page after transferring
! 	 * them to the new page, instead of duplicating them. However,
! 	 * because we can't edit other backends' local lock tables,
! 	 * removing the old lock would leave them with an entry in their
! 	 * LocalPredicateLockHash for a lock they're not holding, which
! 	 * isn't acceptable. So we wind up having to do the same work as a
! 	 * page split, acquiring a lock on the new page and keeping the old
! 	 * page locked too. That can lead to some false positives, but
! 	 * should be rare in practice.
  	 */
  	PredicateLockPageSplit(relation, oldblkno, newblkno);
  }
--- 2691,2705 ----
  						 const BlockNumber newblkno)
  {
  	/*
! 	 * Page combines differ from page splits in that we ought to be able to
! 	 * remove the locks on the old page after transferring them to the new
! 	 * page, instead of duplicating them. However, because we can't edit other
! 	 * backends' local lock tables, removing the old lock would leave them
! 	 * with an entry in their LocalPredicateLockHash for a lock they're not
! 	 * holding, which isn't acceptable. So we wind up having to do the same
! 	 * work as a page split, acquiring a lock on the new page and keeping the
! 	 * old page locked too. That can lead to some false positives, but should
! 	 * be rare in practice.
  	 */
  	PredicateLockPageSplit(relation, oldblkno, newblkno);
  }
***************
*** 3710,3717 **** CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
  					/*
  					 * Remove entry in local lock table if it exists and has
  					 * no children. It's OK if it doesn't exist; that means
! 					 * the lock was transferred to a new target by a
! 					 * different backend.
  					 */
  					if (locallock != NULL)
  					{
--- 3710,3717 ----
  					/*
  					 * Remove entry in local lock table if it exists and has
  					 * no children. It's OK if it doesn't exist; that means
! 					 * the lock was transferred to a new target by a different
! 					 * backend.
  					 */
  					if (locallock != NULL)
  					{
***************
*** 3721,3728 **** CheckTargetForConflictsIn(PREDICATELOCKTARGETTAG *targettag)
  						{
  							rmlocallock = (LOCALPREDICATELOCK *)
  								hash_search_with_hash_value(LocalPredicateLockHash,
! 															targettag, targettaghash,
! 															HASH_REMOVE, NULL);
  							Assert(rmlocallock == locallock);
  						}
  					}
--- 3721,3728 ----
  						{
  							rmlocallock = (LOCALPREDICATELOCK *)
  								hash_search_with_hash_value(LocalPredicateLockHash,
! 													targettag, targettaghash,
! 														  HASH_REMOVE, NULL);
  							Assert(rmlocallock == locallock);
  						}
  					}
***************
*** 3827,3834 **** CheckForSerializableConflictIn(const Relation relation, const HeapTuple tuple,
  										 relation->rd_node.dbNode,
  										 relation->rd_id,
  						 ItemPointerGetBlockNumber(&(tuple->t_data->t_ctid)),
! 					   ItemPointerGetOffsetNumber(&(tuple->t_data->t_ctid)),
! 					   HeapTupleHeaderGetXmin(tuple->t_data));
  		CheckTargetForConflictsIn(&targettag);
  	}
  
--- 3827,3834 ----
  										 relation->rd_node.dbNode,
  										 relation->rd_id,
  						 ItemPointerGetBlockNumber(&(tuple->t_data->t_ctid)),
! 						ItemPointerGetOffsetNumber(&(tuple->t_data->t_ctid)),
! 									  HeapTupleHeaderGetXmin(tuple->t_data));
  		CheckTargetForConflictsIn(&targettag);
  	}
  
*** a/src/include/storage/predicate_internals.h
--- b/src/include/storage/predicate_internals.h
***************
*** 266,272 **** typedef struct PREDICATELOCKTARGETTAG
   * version, before the reading transaction is obsolete, we need some way to
   * prevent errors from reuse of a tuple ID.  Rather than attempting to clean
   * up the targets as the related tuples are pruned or vacuumed, we check the
!  * xmin on access.  This should be far less costly.
   */
  typedef struct PREDICATELOCKTARGET PREDICATELOCKTARGET;
  
--- 266,272 ----
   * version, before the reading transaction is obsolete, we need some way to
   * prevent errors from reuse of a tuple ID.  Rather than attempting to clean
   * up the targets as the related tuples are pruned or vacuumed, we check the
!  * xmin on access.	This should be far less costly.
   */
  typedef struct PREDICATELOCKTARGET PREDICATELOCKTARGET;
  
#11 Robert Haas
robertmhaas@gmail.com
In reply to: Robert Haas (#9)
Re: pgindent (was Re: Header comments in the recently added files)

On Thu, Mar 10, 2011 at 11:25 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 10, 2011 at 10:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

Speaking of running scripts, I think we should run pgindent now.

Yeah, +1 for doing it as soon as Tom is at a good stopping point.  It
makes things a lot simpler later on.

IIRC the argument for an early pgindent run was to standardize the new
code for easier review.  I expect to be spending a whole lot of time
reading collate and SSI code over the next few weeks, so I'm in favor
of pgindent'ing that stuff first.  But I guess we need the typedef
list update before anything can happen.

That's one good reason.  Another is that this is presumably the time
of the cycle when there are the fewest outstanding patches, making it
a good time for changes that are likely to conflict with lots of other
things.

At any rate, it sounds like Andrew needs a few days to get the typedef
list together, so let's wait for that to happen and then we'll see
where we are.

Andrew, any update on this?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company