Postgres 9.6.1: big slowdown after upgrading from 8.4.22
Hello guys,
a very strange thing: after upgrading from 8.4.22 to 9.6.1 I noticed, under heavy benchmarks, a real slowdown of PostgreSQL 9.6.1, with the machine really "out of breath".
By switching back to PostgreSQL 8.4.22, everything works fine again.
After three days of investigation, I narrowed it down to a function with lots of joins and conditions.
Only by removing this condition:
"exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t' and now()::time between gruorari.dalle::time and gruorari.alle::time) )"
The benchmarks with the PostgreSQL 9.6.1 version are now very fast.
The table mentioned by the query has these columns and indexes:
id | numeric(1000,1) | not null default function_get_next_sequence('gruorari_id_seq'::text)
idgrucate | numeric(1000,1) |
dalle | character varying |
alle | character varying |
gg_sett | character varying |
azione | character varying |
Indexes:
"keygruorari" PRIMARY KEY, btree (id)
"alle_idx" btree (alle)
"dalle_idx" btree (dalle)
"gg_sett_idx" btree (gg_sett)
"idgrucate_idx" btree (idgrucate)
What is strange is that with the 8.4.22 version there is no problem, but something does not agree with the 9.6.1 version.
Could you please help me?
THANK YOU!
/F
On 01/05/2017 10:18 AM, Job wrote:
Could you please help me?
You might want to include the query plans for each server
Job <Job@colliniconsulting.it> writes:
Could you please help me?
There's advice here on how to ask this type of question with enough
detail to get answers:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
regards, tom lane
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi guys,
really much appreciated your replies.
You might want to include the query plans for each server
We use a function; the explain analyze is quite similar:
POSTGRESQL 8.4.22:
explain analyze select 'record.com' where 'record.com' like '%.%' and function_cloud_view_orari('53', '192.168.10.234', 'record.com') != '' limit 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.03..0.04 rows=1 width=0) (actual time=1.488..1.488 rows=0 loops=1)
-> Result (cost=0.03..0.04 rows=1 width=0) (actual time=1.485..1.485 rows=0 loops=1)
One-Time Filter: ((function_cloud_view_orari('53'::character varying, '192.168.10.234'::character varying, 'record.com'::character varying))::text <> ''::text)
Total runtime: 1.531 ms
POSTGRES 9.6.1:
explain analyze select 'record.com' where 'record.com' like '%.%' and function_cloud_view_orari('53', '192.168.10.234', 'record.com') != '' limit 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.03..0.04 rows=1 width=32) (actual time=4.216..4.216 rows=0 loops=1)
-> Result (cost=0.03..0.04 rows=1 width=32) (actual time=4.215..4.215 rows=0 loops=1)
One-Time Filter: ((function_cloud_view_orari('53'::character varying, '192.168.10.234'::character varying, 'record.com'::character varying))::text <> ''::text)
Planning time: 0.046 ms
Execution time: 4.230 ms
There is only one condition that, when removed, makes the query on the new 9.6.1 PostgreSQL server very fast, even under the massive benchmark test.
The condition is this:
"exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t' and now()::time between gruorari.dalle::time and gruorari.alle::time) )"
We have a table of "weekly events", for example:
- monday from 12 to 14
- tuesday from 18 to 20
...
What is the best way, on Postgres, to store the information and to INDEX this kind of information?
Thank you, very best regards!
/F
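One way to sketch the schedule storage asked about above (a hypothetical schema; the table name, day-numbering convention, and column types are assumptions, and it has not been benchmarked against this workload) is to store the weekdays as a native int[] with a GIN index and the times as native time columns, so the per-row string concatenation and casts disappear:

```sql
-- hypothetical replacement for the gruorari table
CREATE TABLE gruorari_new (
    id        bigserial PRIMARY KEY,
    idgrucate bigint NOT NULL,
    gg_sett   int[]  NOT NULL,  -- days of week, 0 = Sunday, matching EXTRACT(DOW FROM ...)
    dalle     time   NOT NULL,  -- interval start
    alle      time   NOT NULL   -- interval end
);

-- a GIN index can serve the array-overlap (&&) test directly
CREATE INDEX gruorari_new_gg_sett_idx ON gruorari_new USING gin (gg_sett);
CREATE INDEX gruorari_new_idgrucate_idx ON gruorari_new (idgrucate);

-- the EXISTS condition then needs no string building or casts:
--   exists (select 1 from gruorari_new g
--            where g.idgrucate = grucategorie.id
--              and g.gg_sett && array[extract(dow from now())::int]
--              and localtime between g.dalle and g.alle)
```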
On 6 Jan 2017, at 16:56, Job <Job@colliniconsulting.it> wrote:
We use a function, the explain analyze is quite similar:
POSTGRESQL 8.4.22:
Total runtime: 1.531 ms
POSTGRES 9.6.1:
Execution time: 4.230 ms
Are you being serious? You're complaining about a "big slowdown" for a query that goes from 1.5ms to 4ms?
What is the actual problem you're trying to solve? Because I don't see one in the above.
Just saying, you're obviously worried about something, but should you be?
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On Fri, Jan 6, 2017 at 4:56 PM, Alban Hertroys <haramrae@gmail.com> wrote:
In addition to what Alban says, make sure you are comparing apples to apples.
IOW, have you tuned the postgresql.conf on 9.6.1 to the same values as 8.4.22?
Are you working on the same server & O/S for both versions?
--
Melvin Davidson
I reserve the right to fantasize. Whether or not you
wish to share my fantasy is entirely up to you.
On Sat, Jan 7, 2017 at 2:56 AM, Job <Job@colliniconsulting.it> wrote:
As already mentioned by others, I do not see a major performance problem (at least based on the information you gave) due to upgrading to 9.6.1. Do you have the latest statistics in place? What about the data?
If you notice in the EXPLAIN output, there is a difference in the *width*. In 9.6.1 the width is 32; any idea why?
Regards,
Venkata B N
Database Consultant
On 01/07/2017 04:43 AM, Venkata B Nagothi wrote:
As already mentioned by others, I do not see a major performance problem (at least based on the information you gave) due to upgrading to 9.6.1. Do you have the latest statistics in place? What about the data?
There almost certainly *is* a performance problem, despite the queries only taking a few milliseconds. The timings show an almost ~3x slowdown, and if the application executes a lot of those queries, it can easily mean a 3x increase in system load.
The question is what's causing it. I wonder whether this might be caused
by different data layout, or data not being frozen on 9.6 yet, or
something like that.
Job, can you explain how you performed the upgrade (pg_upgrade or pg_dump), and how you switch back to 8.4?
All this is based on the assumption the difference is consistent, and
not just random fluke.
If you notice in the EXPLAIN output, there is a difference in the *width*. In 9.6.1 the width is 32; any idea why?
I'd guess Result was not filling the width field in 8.4, or something
like that. In any case, the plans are exactly the same in both versions.
regards
Hi guys,
First of all, excuse me: I really did not explain the problem well, sorry...
Are you being serious? You're complaining about a "big slowdown" for a query that goes from 1.5ms to 4ms?
What is the actual problem you're trying to solve? Because I don't see one in the above.
A single query is fast both in 8.4.22 and 9.6.1, no problem.
But the problem is not here!
The big problem is the benchmark before putting the system into production.
We launch about 100/200 queries per second and monitor the two machines with "top".
They are VMs with 4 vCPUs and 10 GB of RAM, running CentOS 7.2 64-bit.
This is what happens:
Postgres 8.4.22:
Average load: 1.5/2.0
Further queries respond very quickly
Postgres 9.6.1:
Average load: 18.0/20.0 !!
Further queries are really very slow
There is a bottleneck
By removing *only* this condition in the query function:
"exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t' and now()::time between gruorari.dalle::time and gruorari.alle::time) )"
The Postgres 9.6.1 machine's average workload returns to about 2.0/3.0!
The problem is not the single query, but the massive number of queries!
Thank you again and excuse me for my bad explanation!
/F
On 01/07/2017 06:44 AM, Job wrote:
"exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t' and now()::time between gruorari.dalle::time and gruorari.alle::time) )"
Without the rest of the query the above is somewhat difficult to
understand relative to your performance issue. The entire function would
be nice to see if that is possible?
Also, is it possible to run the query that is in the function outside the function, as a stand-alone query, using explain analyze?
--
Adrian Klaver
adrian.klaver@aklaver.com
On 7 Jan 2017, at 15:44, Job <Job@colliniconsulting.it> wrote:
Postgres 8.4.22:
Average load: 1.5/2.0
Further queries respond very quickly
Postgres 9.6.1:
Average load: 18.0/20.0 !!
Further queries are really very slow
There is a bottleneck
I see.
By removing *only* this condition in the query function:
"exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t' and now()::time between gruorari.dalle::time and gruorari.alle::time) )"
Then most likely the slow-down you're experiencing is indeed in the above subquery. It could also be the addition of the exists though, let's not rule that out!
Note that I'm not on either of the versions involved (9.3.15 here), so I can't easily observe what you're seeing.
A general observation; I think now() calls gettimeofday() each time, the performance of which can differ significantly depending on which hardware clock is being used by your OS (there are often multiple options). On the contrary, CURRENT_TIMESTAMP, CURRENT_TIME and friends are only updated at the start of the transaction, requiring but a single call to gettimeofday().
Judging from your queries, you don't actually seem to need the accuracy that NOW() provides…
The one-liner is a bit hard to read, btw - and so requires more effort from anyone trying to help.
A useful next step would be to benchmark the separate parts of that query:
1). where gruorari.idgrucate = grucategorie.id
2). and (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM NOW())::int])='t'
3). and now()::time between gruorari.dalle::time and gruorari.alle::time
4). exists(select 1)
Ad 1). If there's any difference in performance between the 2 PG versions here, most likely it's a different plan for this condition. It might be as simple as a difference in statistics or number of rows.
Ad 2). This part seems a bit convoluted, which may be responsible for some of the overhead. Frankly, I'm not 100% certain of the purpose of that condition, but it appears that the gg_sett field contains a comma-separated list of days of the week that need to be matched to today's day of the week.
I rewrote it to:
extract(DOW from NOW())::int = any(('{' || gg_sett || '}')::int[])
Performance of either query on my 9.3 installation is pretty much the same, but I have only done some synthetic benchmarks:
=> explain analyze select '{6,7,0}'::int[] && array[extract(DOW from NOW())::int] from generate_series(1, 10000);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Function Scan on generate_series (cost=0.00..20.00 rows=1000 width=0) (actual time=4.548..58.072 rows=10000 loops=1)
Total runtime: 77.116 ms
(2 rows)
=> explain analyze select extract(DOW from NOW()) = any('{6,7,0}'::int[]) from generate_series(1, 10000);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Function Scan on generate_series (cost=0.00..18.75 rows=1000 width=0) (actual time=4.341..48.902 rows=10000 loops=1)
Total runtime: 67.477 ms
(2 rows)
Ad 3). Casting fields in the where-clause is usually a bad idea. Unless you have indexes on those fields in which they are cast to time AND the query planner chooses to use those indexes, the type-cast will get applied to every candidate record. If you have a million candidate records, that's two million casts taking place (one for each of the two fields).
To say more about this we would need more details about what types those fields are and why you're casting them to time.
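If changing the column types is an option, one hedged sketch (using the column names from the schema shown earlier; untested against this data, and assuming the varchar values all parse cleanly as times) converts the columns to native time once, so no per-row cast is needed afterwards:

```sql
-- one-time conversion instead of casting on every row of every query
ALTER TABLE gruorari
    ALTER COLUMN dalle TYPE time USING dalle::time,
    ALTER COLUMN alle  TYPE time USING alle::time;
```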
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On 01/08/2017 01:12 AM, Alban Hertroys wrote:
On 7 Jan 2017, at 15:44, Job <Job@colliniconsulting.it> wrote:
...
This is what happens:
Postgres 8.4.22:
Average load: 1.5/2.0
Further queries respond very quickly
Postgres 9.6.1:
Average load: 18.0/20.0 !!
Further queries are really very slow
There is a bottleneck
I see.
This behavior is typical when a resource gets saturated. You have probably run out of CPU time or I/O, resulting in growing latencies. Thus more processes are running (or waiting for a CPU) at the same time, which is what average load is based on.
What is the CPU and I/O usage in those cases?
FWIW you still haven't explained how the upgrade was performed. That might be a very important piece of information, because the 8.4 cluster might have hint bits set and/or the data may be mostly frozen, but the 9.6 cluster may not have that yet, resulting in higher CPU usage.
A general observation; I think now() calls gettimeofday() each time, the performance of which can differ significantly depending on which hardware clock is being used by your OS (there are often multiple options). On the contrary, CURRENT_TIMESTAMP, CURRENT_TIME and friends are only updated at the start of the transaction, requiring but a single call to gettimeofday().
Judging from your queries, you don't actually seem to need the accuracy that NOW() provides…
No. now() calls GetCurrentTransactionStartTimestamp(), so it does not
call gettimeofday() and so the clock source overhead is pretty much
irrelevant. Moreover it's marked as 'stable' which makes repeated calls
unnecessary.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
No. now() calls GetCurrentTransactionStartTimestamp(), so it does not
call gettimeofday() and so the clock source overhead is pretty much
irrelevant. Moreover it's marked as 'stable' which makes repeated calls
unnecessary.
Yeah. My own thought, in view of the large version jump involved, is that
there's been some change in the planning of semi-joins that isn't working
well here. But since the OP hasn't provided EXPLAIN output for the
problem case, table schemas, or indeed any of the details suggested in
https://wiki.postgresql.org/wiki/Slow_Query_Questions
it's hard to provide any concrete answers.
regards, tom lane
Hi guys,
FWIW you still haven't explained how the upgrade was performed. That might be a very important piece of information, because the 8.4 cluster might have hint bits set and/or the data may be mostly frozen, but the 9.6 cluster may not have that yet, resulting in higher CPU usage.
We upgraded the database with a pg_dumpall from 8.4.22 and then a psql < BACKUP.SAV command!
Thank you again!
Francesco
We upgraded the database with a pg_dumpall from 8.4.22 and then a psql < BACKUP.SAV command!
Not to miss the obvious: did you run ANALYZE after importing the backup?
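For reference, after a dump-and-restore the new cluster starts with neither planner statistics nor hint bits, so a first pass over the data with the standard maintenance command is often worthwhile before benchmarking:

```sql
-- collect planner statistics and set hint bits in one pass
VACUUM (ANALYZE);
```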
Hi guys,
Here is an update: I think I have found the statement that brings PostgreSQL 9.6.1 under heavy CPU use in a massive benchmark.
I only had to remove one simple, very simple, condition:
and (grulist.stato is null or grulist.stato!=2)
The grulist.stato field is numeric and has a btree index on it.
Very strange: if I leave only one condition (for example: grulist.stato is null), no heavy CPU use occurs (but I need the two conditions with the OR).
But when combining the two conditions (grulist.stato is null OR grulist.stato != 2) something happens...
The single explain analyze is very fast, so maybe there is something that creates a strange condition for PostgreSQL 9.6.1 that consumes CPU resources.
Is the btree index right for these two conditions on a numeric field?
Thank you again!
/F
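Since a btree index cannot efficiently serve an OR of IS NULL and <>, one option worth testing (a sketch only; the index name and the indexed column are made up, and whether it helps depends on how many rows actually satisfy the condition) is a partial index that bakes that exact filter in:

```sql
-- hypothetical partial index matching the problematic filter
CREATE INDEX grulist_stato_open_idx
    ON grulist (nome)
    WHERE stato IS NULL OR stato <> 2;
```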
I would also like to add this:
The explain analyze shows that the index on that numeric field *is not* being used.
I also tried setting enable_seqscan to off, but that index is still not used.
Maybe the problem is this?
Thank you again!
/F
Hi,
maybe I missed something, but it appears to me you did NOT yet show the complete actual query + plan.
All we saw is the explain analyze of the call to the procedure function_cloud_view_orari(), but we
don't know what's happening inside the procedure.
Bye,
Chris.
Hi, here it is, excuse me for the delay:
flashstart=# explain analyze select * from webrecord
left join grucategorie on grucategorie.codcategoria=webrecord.categoria and grucategorie.codgruppo='f50132'
left join grulist on grulist.nome=webrecord.dominio and grulist.codgruppo='f50132' and grulist.stato in (1,2)
left join listef_geo_reject on listef_geo_reject.country=webrecord.country and listef_geo_reject.codgruppo='f50132'
where dominio='PATTERN' and (grulist.stato='1' OR grucategorie.codcategoria is not null OR listef_geo_reject.country is not null) and (stato != 2 or stato is null);
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop Left Join (cost=246.29..801.33 rows=2 width=260) (actual time=0.364..0.364 rows=0 loops=1)
Filter: ((grulist.stato = '1'::numeric) OR (grucategorie.codcategoria IS NOT NULL) OR (listef_geo_reject.country IS NOT NULL))
-> Nested Loop Left Join (cost=164.84..558.37 rows=2 width=241) (actual time=0.364..0.364 rows=0 loops=1)
Join Filter: ((grulist.nome)::text = (webrecord.dominio)::text)
Filter: ((grulist.stato <> '2'::numeric) OR (grulist.stato IS NULL))
Rows Removed by Filter: 1
-> Nested Loop Left Join (cost=124.54..448.46 rows=2 width=103) (actual time=0.297..0.299 rows=1 loops=1)
-> Index Scan using dominio_idx on webrecord (cost=0.56..120.46 rows=2 width=74) (actual time=0.050..0.051 rows=1 loops=1)
Index Cond: ((dominio)::text = 'PATTERN'::text)
-> Bitmap Heap Scan on grucategorie (cost=123.98..163.99 rows=1 width=29) (actual time=0.243..0.243 rows=1 loops=1)
Recheck Cond: (((codgruppo)::text = 'f50132'::text) AND ((codcategoria)::text = (webrecord.categoria)::text))
Heap Blocks: exact=3
-> BitmapAnd (cost=123.98..123.98 rows=1 width=0) (actual time=0.230..0.230 rows=0 loops=1)
-> Bitmap Index Scan on grucategorie_codgruppo_idx (cost=0.00..40.39 rows=14 width=0) (actual time=0.035..0.035 rows=135 loops=1)
Index Cond: ((codgruppo)::text = 'f50132'::text)
-> Bitmap Index Scan on grucategorie_categoria_idx (cost=0.00..83.33 rows=406 width=0) (actual time=0.187..0.187 rows=1579 loops=1)
Index Cond: ((codcategoria)::text = (webrecord.categoria)::text)
-> Materialize (cost=40.30..109.87 rows=1 width=138) (actual time=0.051..0.051 rows=1 loops=1)
-> Bitmap Heap Scan on grulist (cost=40.30..109.87 rows=1 width=138) (actual time=0.047..0.047 rows=1 loops=1)
Recheck Cond: ((nome)::text = 'PATTERN'::text)
Filter: ((stato = ANY ('{1,2}'::numeric[])) AND ((codgruppo)::text = 'f50132'::text))
Rows Removed by Filter: 1
Heap Blocks: exact=2
-> Bitmap Index Scan on grulist_nome_idx (cost=0.00..40.30 rows=2 width=0) (actual time=0.021..0.021 rows=2 loops=1)
Index Cond: ((nome)::text = 'PATTERN'::text)
-> Bitmap Heap Scan on listef_geo_reject (cost=81.46..121.47 rows=1 width=19) (never executed)
Recheck Cond: (((codgruppo)::text = 'f50132'::text) AND ((country)::text = (webrecord.country)::text))
-> BitmapAnd (cost=81.46..81.46 rows=1 width=0) (never executed)
-> Bitmap Index Scan on listef_reject_codgruppo_idx (cost=0.00..40.44 rows=21 width=0) (never executed)
Index Cond: ((codgruppo)::text = 'f50132'::text)
-> Bitmap Index Scan on listef_reject_country_idx (cost=0.00..40.76 rows=63 width=0) (never executed)
Index Cond: ((country)::text = (webrecord.country)::text)
Planning time: 0.881 ms
Execution time: 0.510 ms
Job <Job@colliniconsulting.it> writes:
Hi, here it is, excuse me for the delay:
...
Execution time: 0.510 ms
You're still not showing us the actual problem query (the one with
EXISTS in it). Nor have you yet shown us any table schemas.
*Please* go read the Slow_Query_Questions wiki page. You're wasting
your time as well as the time of people who are trying to help you
by ignoring that advice about the information we need.
regards, tom lane