Recreate primary key without dropping foreign keys?
Hi all,
In PostgreSQL 9.1.3, I have a few fairly large tables with bloated
primary key indexes. I'm trying to replace them using newly created
unique indexes as outlined in the docs. Something like:
CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);
ALTER TABLE distributors DROP CONSTRAINT distributors_pkey,
ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX
dist_id_temp_idx;
However, the initial drop of the primary key constraint fails because
there are a whole bunch of foreign keys depending on it.
I've done some searching and haven't found a workable solution. Is
there any way to swap in the new index for the primary key constraint
without dropping all dependent foreign keys? Or am I pretty much stuck
with dropping and recreating all of the foreign keys?
Thanks in advance.
Chris Ernst
Data Operations Engineer
Zvelo, Inc.
http://zvelo.com/
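For reference, the foreign keys blocking the constraint drop can be enumerated from the system catalogs with a query along these lines (a sketch; the distributors table name comes from the example above):

```sql
-- List the foreign key constraints that reference the distributors table;
-- these are the dependencies that prevent dropping its primary key.
SELECT conname            AS fk_name,
       conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'distributors'::regclass;
```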
On Sun, 15 Apr 2012 18:41:05 -0600
Chris Ernst <cernst@zvelo.com> wrote:
REINDEX is not working here?
Cheers,
Frank
--
Frank Lanitz <frank@frank.uvena.de>
On 04/15/2012 10:57 PM, Frank Lanitz wrote:
On Sun, 15 Apr 2012 18:41:05 -0600 Chris Ernst <cernst@zvelo.com> wrote: [...]
REINDEX is not working here?
Hi Frank,
Thanks, but REINDEX is not an option as it would take an exclusive
lock on the table for several hours.
For all of the other indexes, I create a new index concurrently, drop
the old and swap in the new. But the primary key is a bit trickier
because I can't drop the primary key index without dropping the
primary key constraint and I can't drop the primary key constraint
without dropping all of the foreign keys that reference that column.
- Chris
On 16.04.2012 10:32, Chris Ernst wrote: [...]
Thanks, but REINDEX is not an option as it would take an exclusive lock on the table for several hours.
Well, from my limited view, I'd guess any index rebuild would require such a lock, since it's the primary key with a uniqueness guarantee. I'd think of a complete re-init of the cluster with pg_dump and a restore, but that would also need downtime, at least for write access.
Why is the index so bloated?
Cheers,
Frank
On 04/16/2012 02:39 AM, Frank Lanitz wrote:
Well, from my limited view, I'd guess any index rebuild would require such a lock, since it's the primary key with a uniqueness guarantee. [...]
Why is the index so bloated?
As in my original post, you can create a unique index concurrently and
then replace the primary key index with it. This way, the index
creation doesn't require an exclusive lock. You only need a very brief
exclusive lock to drop and recreate the primary key constraint using the
new index.
However, the index creation is not the issue here. That part is done.
The issue is that there are several foreign keys depending on the
primary key index that I want to drop and replace with the newly built
unique index. I would prefer not to drop and recreate all of the
foreign keys as that would require many hours of down time as well (the
very situation I was trying to avoid by building the index concurrently
and swapping it in).
I believe the index bloat is due to a combination of under-aggressive autovacuum settings and recently deleting about 30% of the table.
- Chris
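The swap Chris describes using for the non-primary-key indexes can be sketched like this on 9.1 (index and table names are illustrative):

```sql
-- Build a replacement index without blocking writes.
CREATE INDEX CONCURRENTLY dist_name_new_idx ON distributors (dist_name);
-- A brief lock to retire the bloated index and take over its name.
DROP INDEX dist_name_idx;
ALTER INDEX dist_name_new_idx RENAME TO dist_name_idx;
```

The primary key resists this pattern only because its index is tied to the constraint, which the foreign keys depend on.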
How about deferring the FKs while recreating the PK?
Or using a temporary parallel table for the other tables' FKs to point at, swapping it in on recreation.
Cheers,
A.A
On 04/16/2012 07:02 PM, amador alvarez wrote:
How about deferring the FKs while recreating the PK?
Or using a temporary parallel table for the other tables' FKs to point at, swapping it in on recreation.
Hmm.. Interesting. But it appears that you have to declare the foreign
key as deferrable at creation. Is there any way to set an existing
foreign key as deferrable?
- Chris
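(Editorial note: in PostgreSQL 9.4 and later, an existing foreign key can be altered in place to become deferrable; this was not available in the 9.1 release discussed here. A sketch, with illustrative names:)

```sql
-- PostgreSQL 9.4+ only; 9.1 has no ALTER CONSTRAINT form.
ALTER TABLE orders
  ALTER CONSTRAINT orders_dist_id_fkey
  DEFERRABLE INITIALLY DEFERRED;
```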
-----Original Message-----
From: Chris Ernst [mailto:cernst@zvelo.com]
Sent: Monday, April 16, 2012 10:55 PM
To: pgsql-admin@postgresql.org
Subject: Re: Recreate primary key without dropping foreign keys?
[...]
It appears that you have to declare the foreign key as deferrable at creation. Is there any way to set an existing foreign key as deferrable?
Maybe this (from the docs) would help:
"ADD table_constraint [ NOT VALID ]
This form adds a new constraint to a table using the same syntax as CREATE TABLE, plus the option NOT VALID, which is currently only allowed for foreign key constraints. If the constraint is marked NOT VALID, the potentially-lengthy initial check to verify that all rows in the table satisfy the constraint is skipped. The constraint will still be enforced against subsequent inserts or updates (that is, they'll fail unless there is a matching row in the referenced table). But the database will not assume that the constraint holds for all rows in the table, until it is validated by using the VALIDATE CONSTRAINT option."
Using this option you can drop and recreate the corresponding FKs in a very short time and start using them, while postponing the "VALIDATE CONSTRAINT" step until later.
It's similar to adding an FK with Oracle's "NOCHECK" option, except that, if I recall correctly, there is no need to run "VALIDATE CONSTRAINT" later.
Regards,
Igor Neyman
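Igor's suggestion, sketched against the example table (the orders table and constraint names are illustrative):

```sql
-- Recreate the FK quickly, skipping the full-table verification for now.
ALTER TABLE orders
  ADD CONSTRAINT orders_dist_id_fkey
  FOREIGN KEY (dist_id) REFERENCES distributors (dist_id) NOT VALID;
-- New inserts/updates are checked immediately; existing rows are
-- verified later, when it is convenient to pay for the table scan.
ALTER TABLE orders VALIDATE CONSTRAINT orders_dist_id_fkey;
```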
Unfortunately, I found that the deferrable option does not let us drop the PK (on PostgreSQL 8.4) while the FKs remain, and I could not try NOT VALID on the constraint, as it is not supported in the 8.x releases.
So unless you are on a 9.x release, or you try the parallel-table approach, you have to follow the manual procedure:
Generate the new index
Drop the FKs
Drop the PK
Recreate the PK, switching to the new index
Recreate the FKs
Can you afford a quick temporary user access to the database?
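The manual procedure above, sketched for a single referencing table (the orders table name is illustrative):

```sql
-- 1. Generate the new index (no long lock).
CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);
-- 2-4. Drop the FKs, then swap the PK onto the new index.
ALTER TABLE orders DROP CONSTRAINT orders_dist_id_fkey;
ALTER TABLE distributors
  DROP CONSTRAINT distributors_pkey,
  ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;
-- 5. Recreate the FKs (this rescans each referencing table).
ALTER TABLE orders
  ADD CONSTRAINT orders_dist_id_fkey
  FOREIGN KEY (dist_id) REFERENCES distributors (dist_id);
```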
On 04/17/2012 07:43 AM, Igor Neyman wrote:
[...]
Using this option you can drop and recreate the corresponding FKs in a very short time and start using them, while postponing the "VALIDATE CONSTRAINT" step until later.
Hi Igor,
Oooooo... I like the sound of this. I'll give this a shot in the test
environment and report back my findings.
Thanks a bunch!
- Chris
I am part of a team that fills an operational role administering 1000+ servers and hundreds of applications. Of course we need to "read" all of our logs, and must use computers to help us. In filtering PostgreSQL logs, there is one thing that makes life difficult for us admins.
Nice things about the PostgreSQL logs:
- user-definable prefix
- each log line after the prefix contains a log line status, such as:
ERROR:
FATAL:
LOG:
NOTICE:
WARNING:
STATEMENT:
- the configurable compile-time option to set the wrap column for the log lines.
Now for the bad things:
Even when the wrap column is set to a very large value (32k), STATEMENT lines still wrap according to the line breaks in the original SQL statement.
Wrapped lines no longer have the prefix, so it is difficult to grep the log for everything pertaining to a particular database or user.
Wrapped lines no longer have the log line status, so it is difficult to auto-ignore all NOTICE status log lines when they wrap, or to ignore all user STATEMENT lines, because they almost always wrap.
In conclusion, I would like to see a logging change that includes the prefix on EVERY line and the status on EVERY line.
Comments?
If everyone :-) is in agreement can the authors just "get it done"?
Thanks for your time.
Evan Rempel
Systems administrator
University of Victoria.
Evan Rempel <erempel@uvic.ca> writes:
Even when the wrap column is set to a very large value (32k), STATEMENT lines still wrap according to the line breaks in the original SQL statement.
Wrapped lines no longer have the prefix, so it is difficult to grep the log for everything pertaining to a particular database or user.
Wrapped lines no longer have the log line status, so it is difficult to auto-ignore all NOTICE status log lines when they wrap, or to ignore all user STATEMENT lines, because they almost always wrap.
I think your life would be better if you used CSV log format.
In conclusion, I would like to see a logging change that includes the prefix on EVERY line and the status on EVERY line.
This doesn't really sound like an improvement to me. It's going to make
the logs bulkier, but they're still not automatically parseable in any
meaningful sense. CSV is the way to go if you want machine-readable logs.
regards, tom lane
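Switching to CSV logging, as Tom suggests, is a postgresql.conf change along these lines (the CSV format requires the logging collector, and enabling the collector requires a server restart):

```ini
# postgresql.conf - route logs through the collector in CSV format
logging_collector = on
log_destination = 'csvlog'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
```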
On Thu, May 31, 2012 at 2:05 PM, Evan Rempel <erempel@uvic.ca> wrote:
Even when the wrap column is set to a very large value (32k) STATEMENT lines still wrap according to the line breaks in
the original SQL statement.
The problem isn't so much the wrapping, it seems, as that your
statements' line breaks are being propagated through. So as a possible
alternative solution, perhaps there could be an option to replace
newlines with spaces before the line goes to the log?
ChrisA
Can this be done with the syslog destination?
Evan Rempel
Systems Administrator
University of Victoria
On 2012-05-30, at 10:37 PM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
On Wed, May 30, 2012 at 09:05:23PM -0700, Evan Rempel wrote:
I am part of a team that fills an operational role administering 1000+ servers and hundreds of applications. Of course we need to "read" all of our logs, and must use computers to help us. In filtering PostgreSQL logs, there is one thing that makes life difficult for us admins.
consider using pg.grep:
http://www.depesz.com/2012/01/23/some-new-tools-for-postgresql-or-around-postgresql/
Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
On Thu, May 31, 2012 at 12:19 PM, Chris Angelico <rosuav@gmail.com> wrote:
On Thu, May 31, 2012 at 2:05 PM, Evan Rempel <erempel@uvic.ca> wrote:
Even when the wrap column is set to a very large value (32k), STATEMENT lines still wrap according to the line breaks in the original SQL statement.
The problem isn't so much the wrapping, it seems, as that your
statements' line breaks are being propagated through. So as a possible
alternative solution, perhaps there could be an option to replace
newlines with spaces before the line goes to the log?
I'd certainly like to see this or something similar (encode the queries into a single line of ASCII; lossy is OK). I like my logs both readable and
greppable.
--
Stuart Bishop <stuart@stuartbishop.net>
http://www.stuartbishop.net/
I have a project where I will have two clients essentially doing the same things at the same time. The idea is that if one has already done the work, then the second one does not need to do it.
I was hoping that adding a task-related unique identifier to a table could be used to coordinate these clients, something like a primary key plus SELECT FOR UPDATE.
The challenge I have is during the initial insert. One of the two clients will cause PostgreSQL to log an error, which I would rather avoid (it just seems dirty).
Here is the timeline:
Both clients A and B become aware of a task.
Client A or client B issues the "select for update ... if not exist do insert" type command.
The other client gets blocked on the "select for update".
The first client finishes its inserts/updates to record that it has dealt with the task.
The second client gets unblocked, reads the record, and realizes that the first client already dealt with the task.
It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
Anyone care to school me?
Evan.
On Sat, 9 Jun 2012 15:41:34 -0700 Evan Rempel <erempel@uvic.ca> wrote:
[...]
It is the "select for update ... if not exist do insert" type command that I am ignorant of how to code.
Anyone care to school me?
It's amazing to me how often I have this conversation ...
How would you expect SELECT FOR UPDATE to work when you're checking to see
if you can insert a row? If the row doesn't exist, there's nothing to
lock against, and thus it doesn't help anything. FOR UPDATE is only
useful if you're UPDATING a row.
That being given, there are a number of ways to solve your problem. Which
one you use depends on a number of factors.
If it's x number of processes all contending for one piece of work, you could
just exclusive lock the entire table, and do the check/insert with the
table locked. This essentially creates a wait queue.
If the processes need to coordinate around doing several pieces of work, you
can put a row in for each piece of work with a boolean field indicating
whether a process is currently working on it. Then you can SELECT FOR
UPDATE a particular row representing work to be done, and if the boolean
isn't already true, set it to true and start working. In my experience,
you'll benefit from going a few steps forward and storing some information
about what's being done on it (like the PID of the process working on it,
and the time it started processing) -- it just makes problems easier to
debug later.
There are other approaches as well, but those are the two that come to
mind.
Not sure what your experience level is, but I'll point out that these kinds of things only work well if your transaction management is correct. I have seen people struggle to get these kinds of things working because they didn't really understand how transactions and locking interact, or because they were using some sort of abstraction layer that does transaction stuff in such an opaque way that they couldn't figure out what was actually happening.
Hope this helps.
--
Bill Moran <wmoran@potentialtech.com>
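Bill's second approach, one row per piece of work claimed under a row lock, can be sketched as follows (table and column names are illustrative):

```sql
-- Claim a piece of work. Other workers block on the FOR UPDATE until
-- this transaction commits, then see in_progress = true and move on.
BEGIN;
SELECT in_progress FROM tasks WHERE task_id = 42 FOR UPDATE;
-- If the row came back with in_progress = false, take the task and
-- record who has it and since when, for easier debugging later.
UPDATE tasks
   SET in_progress = true,
       worker_pid  = pg_backend_pid(),
       started_at  = now()
 WHERE task_id = 42 AND NOT in_progress;
COMMIT;
```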
You will find this reading a good starting point:
http://www.cs.uiuc.edu/class/fa07/cs411/lectures/cs411-f07-tranmgr-3.pdf
There is no "fits all needs" cookbook for this; you will have to learn the theory of database transaction management and locking mechanisms and work out your own solution.
Wish you the best,
Edson.
Em 09/06/2012 19:41, Evan Rempel escreveu:
-----Original Message-----
Both clients A and B become aware of a task.
Ideally you would have this aware-ness manifested as an INSERT into some
kind of job table. The clients can issue the "SELECT FOR UPDATE" + "UPDATE"
commands to indicate that they are going to be responsible for said task.
You seem to be combining "something needs to be done" with "I am able to do that something". You may not have a choice, depending on your situation, but it is something to think about: how can I focus on implementing just the "something needs to be done" part?
If you want to avoid the errors appearing in the logs or client you could
just wrap the INSERT command into a function and trap the duplicate key
exception.
It is hard to give suggestions when you are as vague as "becomes aware to do a task". Ideally, even if you have multiple clients monitoring for the "aware state", only one client should ever actually act on that awareness for a given task. In effect, you want to serialize the monitoring routine at this level, insert the "something needs to be done" record, and then serialize (for update) the "I am able to do that something" action.
David J.
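David's suggestion of wrapping the INSERT in a function and trapping the duplicate-key exception can be sketched like this (function, table, and column names are illustrative):

```sql
-- Returns true if this caller inserted the task row (and thus owns the
-- task), false if another client got there first. Trapping the
-- unique_violation keeps the duplicate-key error out of the server log.
CREATE OR REPLACE FUNCTION claim_task(p_task_id integer)
RETURNS boolean AS $$
BEGIN
    INSERT INTO tasks (task_id) VALUES (p_task_id);
    RETURN true;
EXCEPTION WHEN unique_violation THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql;
```

Each client calls SELECT claim_task(42); only the one that receives true proceeds with the work.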