Too slow to create a new schema and its tables, functions, triggers.
We have 190 identical schemas in a single database. Now, when we create a new
one, with exactly the same structure as the previous ones, it takes 20 or 30
minutes to finish. The usual time to finish that script was 30 seconds.
Basically, my script creates the entire structure for a new customer:
- Create schema TempSchema; --just to be sure that nobody will connect until
it finishes
- create tables (100), constraints, functions, etc.
- import data using a data-only restore.
- rename TempSchema to the production name.
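As a rough sketch, the steps above might look like this in SQL (the table and the final schema name here are made up for illustration; the real script creates about 100 tables):

```sql
-- Build everything in one transaction so nobody sees a half-built schema.
BEGIN;

CREATE SCHEMA tempschema;
SET search_path TO tempschema;

-- ~100 tables like this one (names are placeholders)
CREATE TABLE customer (
    id   integer PRIMARY KEY,
    name text NOT NULL
);

-- constraints, functions, triggers ...

COMMIT;

-- After the data-only restore has finished:
ALTER SCHEMA tempschema RENAME TO customer_0191;
```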
On the second step, when creating tables, functions and so on, it sometimes
takes a minute just to create one table or one function.
So, does this happen because we have almost 200 schemas in that database?
I'm almost sure it does.
What do I need to do to make my script run as fast as before? Do I need to
reindex? Vacuum? Or am I reaching a limit on the number of schemas in a
Postgres database?
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
On 7/23/19 12:20 PM, PegoraroF10 wrote:
> We have in a single database 190 identical schemas. Now, when we create a new
> one, with exactly same structure as the previous ones, it takes 20 or 30
> minutes to finish. Usual time to finish that script was 30 seconds.
> Basically, my script creates an entire structure for a new customer:
> - Create schema TempSchema; --just to be sure that nobody will connect until
> it finishes
> - create tables (100), constraints, functions, etc.
> - import data using restore only data.
> - rename TempSchema to production name.
> On second step, when creating tables, functions and so on, it takes
> sometimes a minute just to create one table or one function.
Can you show an example of a portion of the script?
Have you looked at the Postgres log during the above to see if there are any
relevant messages?
> So, does this happens because we have almost 200 schemas on that database ?
> I´m almost sure about.
> What do I need do to run my script as before ? Do I need to Reindex ? Vacuum
> ? Or am I reaching a limit in a number of schemas in a Postgres database ?
--
Adrian Klaver
adrian.klaver@aklaver.com
On Tue, Jul 23, 2019 at 1:36 PM Adrian Klaver <adrian.klaver@aklaver.com>
wrote:
> On 7/23/19 12:20 PM, PegoraroF10 wrote:
>> We have in a single database 190 identical schemas. Now, when we create a new
>> one, with exactly same structure as the previous ones, it takes 20 or 30
>> minutes to finish. Usual time to finish that script was 30 seconds.
Can you create 200 schemas using your script without the data load step
and reproduce the issue? With 19,000 tables on that database, how
aggressive is autovacuum?
I had not vacuumed or reindexed my database in the last 30 days, and that
was my problem.
It works fine if I run a REINDEX DATABASE before adding that new schema.
Well, I'll try just reindexing the system catalogs before adding a new schema
to see if it works.
On 7/24/19 11:33 AM, PegoraroF10 wrote:
> I did not have vacuumed or reindexed my database for last 30 days and that
> was my problem.
Autovacuum should be dealing with this.
Do you have it throttled in some manner?
> It works fine if I do a reindex database before adding that new schema.
> Well, I´ll try just reindexing system before adding a new schema to see if
> it works.
So, what should I tune in autovacuum?
My script was running strangely. The Postgres log shows me the time spent
creating functions. The same happens when creating triggers and tables too.
Sometimes it's very fast and sometimes ...
statement: create or replace function valoresdfe... 0 mins 1.135 secs
statement: create or replace function dadosorigem... 0 mins 0.055 secs
statement: CREATE OR REPLACE FUNCTION SONU... 0 mins 0.013 secs
statement: create or replace function contatoscampa... 2 mins 13.492 secs
statement: create or replace function FORMATARTELEF... 0 mins 0.013 secs
statement: create or replace function ChecaVar 0 mins 0.012 secs
statement: CREATE or replace FUNCTION criatrigge... 1 mins 16.42 secs
So, is this a vacuum problem? What do I need to configure?
And again, if I do a REINDEX DATABASE before creating that schema, it works
perfectly.
On 7/25/19 3:16 PM, PegoraroF10 wrote:
> So, what should I tune on autovacuum ?
> My script was running strangely. Postgres log shows me the time spent to
> create functions. That happens when creating triggers and tables too.
> Sometimes it´s too fast and sometimes ...
I didn't realize there was a "too fast" :) More below.
> statement: create or replace function valoresdfe... 0 mins 1.135 secs
> statement: create or replace function dadosorigem... 0 mins 0.055 secs
> statement: CREATE OR REPLACE FUNCTION SONU... 0 mins 0.013 secs
> statement: create or replace function contatoscampa... 2 mins 13.492 secs
> statement: create or replace function FORMATARTELEF... 0 mins 0.013 secs
> statement: create or replace function ChecaVar 0 mins 0.012 secs
> statement: CREATE or replace FUNCTION criatrigge... 1 mins 16.42 secs
Are there other messages immediately (or in close vicinity) before/after the
slow statements?
> So, is this a vacuum problem ? What do I need to configure it ?
Configuration of autovacuum is done in postgresql.conf using these settings:
https://www.postgresql.org/docs/11/runtime-config-autovacuum.html
You might want to send the settings you have to the list. Also the
setting for track_counts.
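One way to collect both the autovacuum settings and track_counts in a single query (this is ordinary pg_settings usage, not something specific to this thread):

```sql
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'autovacuum%'
   OR name = 'track_counts'
ORDER BY name;
```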
> And again, if I do a Reindex database before creating that schema, it works
> perfectly.
Nope, there is no message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.
On 7/25/19 4:01 PM, PegoraroF10 wrote:
> Nope, no one message near those statements.
> I haven´t changed anything on Postgres.conf related with autovacuum.
So what are the settings?
On 7/25/19 4:01 PM, PegoraroF10 wrote:
> Nope, no one message near those statements.
> I haven´t changed anything on Postgres.conf related with autovacuum.
You probably should also look at this system view:
https://www.postgresql.org/docs/11/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW
to see what autovacuum activity has occurred on the tables.
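For example, a query like this (ordinary usage of that view, not from the original message) shows which tables have accumulated the most dead tuples and when autovacuum last touched them:

```sql
-- Tables with the most dead tuples, with their last (auto)vacuum times
SELECT schemaname, relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup DESC
LIMIT 20;
```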
On Fri, Jul 26, 2019 at 1:01 AM PegoraroF10 <marcos@f10.com.br> wrote:
> Nope, no one message near those statements.
> I haven´t changed anything on Postgres.conf related with autovacuum.
Please take a look at and post the results of the following query:
select name, setting from pg_settings where name like 'autovacuum%';
Any chance autovacuum is stopped?
PegoraroF10 wrote on 26.07.2019 at 00:16:
> statement: create or replace function valoresdfe... 0 mins 1.135 secs
> statement: create or replace function dadosorigem... 0 mins 0.055 secs
> statement: CREATE OR REPLACE FUNCTION SONU... 0 mins 0.013 secs
> statement: create or replace function contatoscampa... 2 mins 13.492 secs
> statement: create or replace function FORMATARTELEF... 0 mins 0.013 secs
> statement: create or replace function ChecaVar 0 mins 0.012 secs
> statement: CREATE or replace FUNCTION criatrigge... 1 mins 16.42 secs
Is it possible those functions (where creation was slow) were still in use
by another session and the create script had to wait for an exclusive lock to replace the function?
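One way to check that hypothesis while the script is running (a standard pg_stat_activity query, not from the original message) is to look for backends waiting on locks:

```sql
-- Sessions currently blocked on a lock, and how long their query has run
SELECT pid, state, wait_event_type, wait_event,
       age(clock_timestamp(), query_start) AS running_for,
       query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```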
Running that sql:
name setting
autovacuum on
autovacuum_analyze_scale_factor 0.1
autovacuum_analyze_threshold 50
autovacuum_freeze_max_age 200000000
autovacuum_max_workers 3
autovacuum_multixact_freeze_max_age 400000000
autovacuum_naptime 60
autovacuum_vacuum_cost_delay 20
autovacuum_vacuum_cost_limit -1
autovacuum_vacuum_scale_factor 0.2
autovacuum_vacuum_threshold 50
autovacuum_work_mem -1
Nope, that schema and its entire structure did not exist yet.
select count(*), count(*) filter (where last_autovacuum is not null) from
pg_stat_all_tables
count count
36605 1178
But which tables should I look at to see if vacuum ran on them?
On 7/26/19 4:15 AM, PegoraroF10 wrote:
> select count(*), count(*) filter (where last_autovacuum is not null) from
> pg_stat_all_tables
> count count
> 36605 1178
> But what tables should I see if vacuum ran on it ?
I would recommend reading this:
https://www.postgresql.org/docs/11/routine-vacuuming.html
and in particular:
https://www.postgresql.org/docs/11/routine-vacuuming.html#AUTOVACUUM
For now, look at the actual values of last_autovacuum to see how current
the autovacuuming is. My guess is that the issues you are having have to
do with bloat in the system tables, so I would start there.
On Fri, Jul 26, 2019 at 1:15 PM PegoraroF10 <marcos@f10.com.br> wrote:
> select count(*), count(*) filter (where last_autovacuum is not null) from
> pg_stat_all_tables
> count count
> 36605 1178
What are the results of the same query against pg_stat_sys_tables and
pg_stat_user_tables? That would help in understanding which set of
tables is not being vacuumed.
Luca
pg_stat_all_tables
count count
37158 807
pg_stat_sys_tables
count count
16609 223
pg_stat_user_tables
count count
20549 584
Correct, it seems to be something wrong in the system tables. Maybe our script
is running something that is already there; we will check.
select * from pg_stat_sys_tables where schemaname = 'pg_catalog'
relname n_live_tup n_dead_tup
pg_attrdef 3699 1095
pg_index 4756 1183
pg_sequence 20827 1482
pg_statistic 171699 27101
pg_trigger 221319 20718
pg_shdepend 225017 22337
pg_attribute 883023 164153
pg_depend 1553586 142960
and for all of them last_vacuum is null and last_autovacuum is too old.
So, is it better to configure autovacuum properly for these tables, or should
I run vacuum periodically?
Obviously I'll check our script too.
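If periodic manual vacuuming is chosen, a plain VACUUM on the bloated catalogs is possible (run as a superuser; unlike VACUUM FULL, a plain VACUUM does not take an exclusive lock). The table list below simply mirrors the worst offenders shown above:

```sql
-- Plain VACUUM on the most bloated system catalogs
VACUUM (VERBOSE, ANALYZE) pg_catalog.pg_attribute;
VACUUM (VERBOSE, ANALYZE) pg_catalog.pg_depend;
VACUUM (VERBOSE, ANALYZE) pg_catalog.pg_statistic;
```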
On Fri, Jul 26, 2019 at 9:21 PM PegoraroF10 <marcos@f10.com.br> wrote:
> So, is it better to configure autovacuum properly to these tables or should
> I run vacuum periodically ?
> Obviously I´ll check our script too.
My guess would be that either you have disabled autovacuum on such
tables (I don't know if that is possible for system tables, but for
regular tables it is) or your script is running too frequently to let
autovacuum proceed on the sys tables. I would bet on the latter.
It seems to me you are also creating and deleting a lot of stuff, bloating
the catalogs in that way. Could it be your script is keeping a
transaction open (something like a try in a loop)?
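A query like this (standard pg_stat_activity usage, not from the original message) would reveal sessions holding transactions open, which block autovacuum from removing dead catalog rows:

```sql
-- Sessions with an open transaction, oldest first
SELECT pid, state, xact_start,
       age(clock_timestamp(), xact_start) AS xact_age,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
```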