Question about DROP TABLE
Dear PostgreSQL Hackers,
I've discovered an issue with dropping a large table (~5 TB). I was under the
impression that DROP TABLE is a fast operation, but I found out my assumption
was wrong.
Is there any way to tune things so that a large table can be dropped in a
matter of seconds or minutes? Is there any configuration variable in
postgresql.conf or any other tuning option available?
The PostgreSQL version used is 9.4.
Thanks a lot!
Michal
Hi
2016-01-12 11:57 GMT+01:00 Michal Novotny <michal.novotny@trustport.com>:
Dear PostgreSQL Hackers,
I've discovered an issue with dropping a large table (~5 TB). I was under the
impression that DROP TABLE is a fast operation, but I found out my assumption
was wrong.
Is there any way to tune things so that a large table can be dropped in a
matter of seconds or minutes? Is there any configuration variable in
postgresql.conf or any other tuning option available?
DROP TABLE should be fast.
There can be two reasons why it isn't:
1. locks - are you sure the statement wasn't waiting on some lock? (See the
query sketch below.)
2. a filesystem issue - can you check how fast your I/O can rm a 5 TB file?
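For the lock check, something along these lines should show whether the DROP
TABLE backend is sitting on an ungranted lock rather than doing I/O. This is
only a sketch; it assumes the DROP TABLE is running in another session and
uses the pg_stat_activity and pg_locks views as they look on 9.4 (where
pg_stat_activity still has the boolean "waiting" column):

  -- Is the DROP TABLE backend waiting on a lock?
  SELECT a.pid, a.waiting, a.query, l.locktype, l.mode, l.granted
  FROM pg_stat_activity a
  JOIN pg_locks l ON l.pid = a.pid
  WHERE a.query ILIKE 'drop table%'
  ORDER BY l.granted;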
Regards
Pavel
Hi Michal,
This isn't really a question for -hackers, which is the list for Postgres
development; -general or -performance would have been more appropriate.
On 2016-01-12 11:57:05 +0100, Michal Novotny wrote:
I've discovered an issue with dropping a large table (~5 TB). I was under the
impression that DROP TABLE is a fast operation, but I found out my assumption
was wrong.
What exactly did you do, and how long did it take? Is there any chance you
were actually waiting for the lock on that large table, rather than for the
actual execution?
Is there any way to tune things so that a large table can be dropped in a
matter of seconds or minutes? Is there any configuration variable in
postgresql.conf or any other tuning option available?
The time for dropping a table is primarily spent on three things:
1) Acquiring the exclusive lock. How long this takes depends entirely on the
   concurrent activity; if there's a long-running session using that table,
   the drop will wait until that session is finished.
2) Evicting the cached portion of that table from shared buffers. How long
   that takes depends on the size of shared_buffers, but usually this is a
   relatively short amount of time, and it only matters if you drop many,
   many relations.
3) The time the filesystem takes to actually remove the files - in your case
   roughly 5000 1 GB segment files (see the size check sketched below). This
   will take a while, but shouldn't be minutes.
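For 3), a quick way to see how much the filesystem will have to unlink - just
a sketch, with "big_table" standing in for the real table name:

  -- Relation size and roughly how many 1 GB segment files back it.
  SELECT pg_size_pretty(pg_relation_size('big_table'))      AS heap_size,
         ceil(pg_relation_size('big_table') / 1073741824.0) AS approx_segment_files;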
Greetings,
Andres Freund
Hi Pavel,
thanks for the information. I've been doing more investigation of this issue,
and there's an autovacuum running on the table; it starts automatically even
though "autovacuum = off" is set in the postgresql.conf configuration file.
The test of rm'ing a 5 TB file was fast - nothing like the 24 hours the drop
has already been running. I guess the autovacuum is the issue. Is there any
way to disable it? I killed the process with 'kill -9' yesterday and it
started again.
Is there any way to cancel this process, stop PostgreSQL from running
autovacuum again, and do the drop instead?
Thanks,
Michal
Hi Andres,
thanks a lot for your reply. Unfortunately, I've found out it hasn't really
started the DROP TABLE yet; it's blocked on an autovacuum running on the
table, and even if I kill that process, it starts again and again.
Is there any way to do the DROP TABLE and bypass/disable autovacuum entirely?
Please note that "autovacuum = off" is set in the config file
(postgresql.conf).
Thanks a lot,
Michal
2016-01-12 12:14 GMT+01:00 Michal Novotny <michal.novotny@trustport.com>:
Hi Pavel,
thanks for the information. I've been doing more investigation of this issue,
and there's an autovacuum running on the table; it starts automatically even
though "autovacuum = off" is set in the postgresql.conf configuration file.
A real autovacuum is cancelled automatically. It looks like a VACUUM started
by cron, maybe?
On 2016-01-12 12:17:09 +0100, Pavel Stehule wrote:
A real autovacuum is cancelled automatically. It looks like a VACUUM started
by cron, maybe?
Unless it's an anti-wraparound autovacuum...
Andres
On 12/01/16 12:17, Pavel Stehule wrote:
A real autovacuum is cancelled automatically. It looks like a VACUUM started
by cron, maybe?
Not if it's to prevent wraparound, which isn't unlikely if autovacuum=off.
.m
2016-01-12 12:22 GMT+01:00 Marko Tiikkaja <marko@joh.to>:
Not if it's to prevent wraparound, which isn't unlikely if autovacuum=off.
I didn't know that.
Thank you
Pavel
On 01/12/2016 12:20 PM, Andres Freund wrote:
Unless it's an anti-wraparound autovacuum...
Andres
The autovacuum is not started by cron. How should I understand
"anti-wraparound autovacuum"?
Thanks,
Michal
Hi,
On 2016-01-12 12:17:01 +0100, Michal Novotny wrote:
thanks a lot for your reply. Unfortunately, I've found out it hasn't really
started the DROP TABLE yet; it's blocked on an autovacuum running on the
table, and even if I kill that process, it starts again and again.
Start the DROP TABLE and *then* cancel the autovacuum session. That
should work.
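Roughly like this - just a sketch, with "big_table" standing in for the real
table name; on 9.4 the autovacuum workers show up in pg_stat_activity with a
query text starting with 'autovacuum:':

  -- Session 1: issue the drop; it queues behind the autovacuum's lock.
  DROP TABLE big_table;

  -- Session 2: cancel the autovacuum worker vacuuming that table, so the
  -- DROP can acquire its ACCESS EXCLUSIVE lock.
  SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
  WHERE query LIKE 'autovacuum:%big_table%';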
Is there any way to do the DROP TABLE and bypass/disable autovacuum entirely?
Please note that "autovacuum = off" is set in the config file
(postgresql.conf).
That actually is likely to have caused the problem. Every
autovacuum_freeze_max_age transactions, tables need to be vacuumed - otherwise
the data can't be interpreted correctly anymore at some point. That's called
an 'anti-wraparound vacuum'. It's started even if you have disabled
autovacuum, to prevent database corruption.
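You can check how close each table is to that limit with something like the
following - just a sketch against the standard catalogs:

  -- Tables with the oldest relfrozenxid; once age() exceeds
  -- autovacuum_freeze_max_age, an anti-wraparound vacuum is forced.
  SELECT c.relname,
         age(c.relfrozenxid) AS frozen_xid_age,
         current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
  FROM pg_class c
  WHERE c.relkind = 'r'
  ORDER BY age(c.relfrozenxid) DESC
  LIMIT 10;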
If you disable autovacuum, you really should start vacuums in some other
way.
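A periodic manual vacuum along these lines (run from cron through psql, say)
keeps relfrozenxid young enough that the forced anti-wraparound vacuum never
kicks in; "big_table" is again only a placeholder:

  -- Freeze the table manually so an anti-wraparound autovacuum isn't
  -- forced later.
  VACUUM (FREEZE, VERBOSE) big_table;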
Greetings,
Andres Freund
Hi Andres,
On 01/12/2016 12:37 PM, Andres Freund wrote:
Start the DROP TABLE and *then* cancel the autovacuum session. That should
work.
By cancelling the autovacuum session, do you mean running
pg_cancel_backend(pid int) *after* running DROP TABLE?
So should I set autovacuum to on and restart PostgreSQL before doing the DROP
TABLE (and the pg_cancel_backend() as mentioned above)?
That actually is likely to have caused the problem. Every
autovacuum_freeze_max_age transactions, tables need to be vacuumed - otherwise
the data can't be interpreted correctly anymore at some point. That's called
an 'anti-wraparound vacuum'. It's started even if you have disabled
autovacuum, to prevent database corruption.
OK, any recommendation on how to set autovacuum_freeze_max_age?
Thanks,
Michal
Hi Andres,
thanks a lot. I managed to run the DROP TABLE and then cancel the autovacuum
process using pg_cancel_backend(autovacuum_pid), and it successfully dropped
the 5 TB table.
Thanks a lot!
Michal