slow queries

Started by Howie · over 27 years ago · 15 messages · general
#1Howie
caffeine@toodarkpark.org

ive been evaluating pgsql as a replacement for MySQL, which currently
handles all of a client's authentication needs for their websites.
however, some queries that have been blindingly fast under MySQL are
incredibly slow under pgsql.

for instance:

---[ CUT ]---
SELECT
    doms.dom, types.batch, types.active, codes.code
FROM
    doms, types, codes, hosts
WHERE
    hosts.client = doms.client AND
    doms.client = types.client AND
    types.batch = 'FREECODE' AND
    types.type = codes.type AND
    hosts.hostname = 'somehostnamehere.com';
---[ CUT ]---

under MySQL, this query takes about 2-3 seconds. under pgsql v6.3, this
query takes roughly 40 seconds to a minute. system is a P133, Linux
kernel 2.0.33, 128m mem, EIDE based ( bah ) disks. there is very little
going on while executing these queries since this is a development box.

there are keys/indexes on hosts.client (primary), doms.client (primary),
types.batch & types.client (unique index), and types.type (primary). all
the 'client' columns are int4, types.batch is "char(8) not null". the
pgsql schema and mysql schema are the same and the indexes/keys are the
same.

doms has about 80 rows as does hosts. types has ~350 rows, codes has 157k
rows ( a "select count(*) from codes" takes about 3 seconds to return;
MySQL returns _immediately_ ).

am i missing something, or is pgsql really that much slower? if you need
the table layout and some sample data, i can supply that...

---
Howie <caffeine@toodarkpark.org> URL: http://www.toodarkpark.org
[[NSNotificationCenter defaultCenter] addObserver:systemAdministrator
selector:@selector(disableUserAccount:) name:@"UserIsWhining" object:aLuser];

#2Wim Ceulemans
wim.ceulemans@pophost.eunet.be
In reply to: Howie (#1)
Re: [GENERAL] slow queries

I have also seen this slowdown when joining more than 3 or 4 tables. I have
the impression that the query planner just searches through the tables
linearly instead of applying the selective conditions to each table first.

#3David Hartwig
daveh@insightdist.com
In reply to: Howie (#1)
Re: [GENERAL] slow queries

The postgres query sounds slower than I would expect. Have you run a "VACUUM
ANALYZE"? This is needed initially, and occasionally, to compute statistics for
optimal query planning. VACUUM also recovers disk space. "man vacuum"

You may want to prefix your SELECT statement with EXPLAIN to display the query
plan. Try this before and after the VACUUM.
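For example, using the query from the original post (the plan output will vary with your data):

```sql
-- Update planner statistics (and reclaim space), then inspect the plan:
VACUUM ANALYZE;

EXPLAIN
SELECT doms.dom, types.batch, types.active, codes.code
FROM doms, types, codes, hosts
WHERE hosts.client = doms.client
  AND doms.client = types.client
  AND types.batch = 'FREECODE'
  AND types.type = codes.type
  AND hosts.hostname = 'somehostnamehere.com';
```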

As far as the count(*) goes, I am not familiar with MySQL's implementation, but
postgres does a sequential scan on a count(*) without a WHERE condition. Some
SQL engines have this value saved off for just this kind of query. This is just
one of the many performance/spaces tradeoffs. Postgres, you will find, has many
other redeeming features.


#4Thomas Good
tomg@nrnet.org
In reply to: Howie (#1)
Re: [GENERAL] slow queries

On Thu, 17 Sep 1998, Howie wrote:

Howie,

I have a query that looks up a patient's placement (which ward)
in one table and then looks up diagnostic and demographic data
in another...takes 53 seconds. Ouch.

However, when I use table aliasing...the query output is on the
screen *before* I submit the query. ;-)

I'll send syntax, if you like.

Cheers,
Tom

----------- Sisters of Charity Medical Center ----------
Department of Psychiatry
----
Thomas Good, System Administrator <tomg@q8.nrnet.org>
North Richmond CMHC/Residential Services Phone: 718-354-5528
75 Vanderbilt Ave, Quarters 8 Fax: 718-354-5056
Staten Island, NY 10304 www.panix.com/~ugd
----
Powered by PostgreSQL 6.3.2 / Perl 5.004 / DBI-0.91::DBD-PG-0.69

#5David Hartwig
daveh@insightdist.com
In reply to: Thomas Good (#4)
Re: [GENERAL] slow queries

Thomas Good wrote:

On Thu, 17 Sep 1998, Howie wrote:

Howie,

I have a query that looks up a patient's placement (which ward)
in one table and then looks up diagnostic and demographic data
in another...takes 53 seconds. Ouch.

However, when I use table aliasing...the query output is on the
screen *before* I submit the query. ;-)

I'll send syntax, if you like.

I would like to see it! I was not aware that table aliasing could have any impact
on performance.

#6Oliver Elphick
olly@lfix.co.uk
In reply to: David Hartwig (#5)
Re: [GENERAL] slow queries

Thomas Good wrote:

I have a query that looks up a patient's placement (which ward)
in one table and then looks up diagnostic and demographic data
in another...takes 53 seconds. Ouch.

However, when I use table aliasing...the query output is on the
screen *before* I submit the query. ;-)

I'll send syntax, if you like.

Please do, to the list. I have never heard of 'table aliasing' and would
like to know more.

--
Oliver Elphick Oliver.Elphick@lfix.co.uk
Isle of Wight http://www.lfix.co.uk/oliver
PGP key from public servers; key ID 32B8FAA1
========================================
"Give, and it shall be given unto you; good measure,
pressed down, and shaken together, and running over,
shall men pour into your lap. For by your standard of
measure it will be measured to you in return."
Luke 6:38

#7Thomas Good
tomg@admin.nrnet.org
In reply to: David Hartwig (#5)
Re: [GENERAL] slow queries

On Fri, 18 Sep 1998, David Hartwig wrote:

I would like to see it! I was not aware that table aliasing could
have any impact on performance.

Hello Oliver and David,

I was a bit amazed myself. Federico Passaro, on the SQL list,
helped me out some time ago when a query was failing.
His code worked so well that I filed it away for a rainy day...
this week I decided to try it and see if it helped hasten my
slooooowest query. It did.

Here is my code (hope it's coherent, I didn't name the tables!):

#!/bin/sh
# Prompt for a unit code, then run two queries against the "millie"
# database; note the table aliases (S, T) used in the join.
tput clear
echo -ne "Enter Unit Code: "
read unit
psql millie <<EOF | less
SELECT unit_lname FROM crund1 WHERE unit_id = ${unit};
SELECT S.tr_id, T.i_date, S.client_lname, S.client_fname, S.eth_nic
FROM svcrd1 S, trhist T
WHERE S.client_id = T.pt_id
  AND T.unit_id = ${unit}
  AND T.o_date IS NULL
ORDER BY S.client_lname;
EOF

Cheers,
Tom

---------- Sisters of Charity Medical Center ----------
Department of Psychiatry
----
Thomas Good <tomg@q8.nrnet.org>
Coordinator, North Richmond C.M.H.C. Information Systems
75 Vanderbilt Ave, Quarters 8 Phone: 718-354-5528
Staten Island, NY 10304 Fax: 718-354-5056

#8Wim Ceulemans
wim.ceulemans@pophost.eunet.be
In reply to: Thomas Good (#7)
Re: [GENERAL] slow queries

Thomas Good wrote:

I have a query that looks up a patient's placement (which ward)
in one table and then looks up diagnostic and demographic data
in another...takes 53 seconds. Ouch.

However, when I use table aliasing...the query output is on the
screen *before* I submit the query. ;-)

I'll send syntax, if you like.

Please do, to the list. I have never heard of 'table aliasing' and would
like to know more.

I'm also very interested in this table aliasing.

Regards
Wim

#9Howie
caffeine@toodarkpark.org
In reply to: Thomas Good (#7)
Re: [GENERAL] slow queries

On Fri, 18 Sep 1998, Thomas Good wrote:

On Fri, 18 Sep 1998, David Hartwig wrote:

I would like to see it! I was not aware that table aliasing could
have any impact on performance.

[SNIP]
I was a bit amazed myself. Federico Passaro, on the SQL list,
helped me out some time ago when a query was failing.
His code worked so well that I filed it away for a rainy day...
this week I decided to try it and see if it helped hasten my
slooooowest query. It did.
[SNIP]

seems that by creating a view ( with the query i mentioned before ), my
queries were sped up by roughly 10 seconds... odd odd odd.

---
Howie <caffeine@toodarkpark.org> URL: http://www.toodarkpark.org
[[NSNotificationCenter defaultCenter] addObserver:systemAdministrator
selector:@selector(disableUserAccount:) name:@"UserIsWhining" object:aLuser];

#10Jackson, DeJuan
djackson@cpsgroup.com
In reply to: Howie (#9)
RE: [GENERAL] slow queries

seems that by creating a view ( with the query i mentioned before ), my
queries were sped up by roughly 10 seconds... odd odd odd.

Not really so odd when you think of everything the backend has to do to
process a query. Parsing, planning, and optimizing (not necessarily in
that order) can be very costly. A view basically skips most of those
three steps.
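For instance (a sketch; the view name is made up, and the hostname condition stays outside the view so it can vary per lookup):

```sql
-- Hypothetical view wrapping the expensive four-table join:
CREATE VIEW freecode_doms AS
SELECT hosts.hostname, doms.dom, types.batch, types.active, codes.code
FROM doms, types, codes, hosts
WHERE hosts.client = doms.client
  AND doms.client = types.client
  AND types.batch = 'FREECODE'
  AND types.type = codes.type;

-- Per-host lookup against the view:
SELECT dom, batch, active, code
FROM freecode_doms
WHERE hostname = 'somehostnamehere.com';
```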
-DEJ

#11Jeremy Hansen
jeremy@xxedgexx.com
In reply to: Jackson, DeJuan (#10)
Is there a server log????

Is there a server log for postgres? Something that records errors,
or some type of debugging information. I'm trying to debug a simple
modify in perl that's not working and it's driving me crazy because
I see no error output.

Thanks
-jeremy

#12Oliver Elphick
olly@lfix.co.uk
In reply to: Jeremy Hansen (#11)
Re: [GENERAL] Is there a server log????

Jeremy Hansen wrote:

Is there a server log for postgres? Something that records errors,
or some type of debugging information. I'm trying to debug a simple
modify in perl that's not working and it's driving me crazy because
I see no error output.

Debugging options can be set on the backend; look at the man page for
postgres. Use the -o option to the postmaster to set the backend options,
remembering to enclose all the backend options in single quotes.

--
Oliver Elphick Oliver.Elphick@lfix.co.uk
Isle of Wight http://www.lfix.co.uk/oliver
PGP key from public servers; key ID 32B8FAA1
========================================
"For I am convinced that neither death, nor life, nor
angels, nor principalities, nor things present, nor
things to come, nor powers, nor height, nor depth, nor
any other created thing, shall be able to separate us
from the love of God, which is in Christ Jesus our
Lord." Romans 8:38,39

#13Rex McMaster
rmcm@compsoft.com.au
In reply to: Jeremy Hansen (#11)
Re: [GENERAL] Is there a server log????

You can start postmaster with stdout/stderr directed to syslogd;

su postgres -c '/bin/sh -c "/usr/local/pgsql/bin/postmaster -D/usr/local/pgsql/data -o-e 2>&1 | logger -t postgres -i -p local1.notice &"'
------------------------------------------------------------
Rex McMaster rmcm@compsoft.com.au
rex@mcmaster.wattle.id.au
PGP Public key: http://www.compsoft.com.au/~rmcm/pgp-pk

#14Howie
caffeine@toodarkpark.org
In reply to: Jackson, DeJuan (#10)
RE: [GENERAL] slow queries

On Fri, 25 Sep 1998, Jackson, DeJuan wrote:

seems that by creating a view ( with the query i mentioned before ), my
queries were sped up by roughly 10 seconds... odd odd odd.

Not really so odd when you think of everything the backend has to do to
process a query. Parsing, planning, and optimizing (not necessarily in
that order) can be very costly. A view basically skips most of those
three steps.

all that's taken out is the parsing bit. the planning and optimizing still
must be done ( unless it's done when the view is/was created ), yes/no?
the view should still hit the indexes created for the underlying tables,
right?

---
Howie <caffeine@toodarkpark.org> URL: http://www.toodarkpark.org
"Oh my god, they killed init! YOU BASTARDS!"

#15Jackson, DeJuan
djackson@cpsgroup.com
In reply to: Howie (#14)
RE: [GENERAL] slow queries

On Fri, 25 Sep 1998, Jackson, DeJuan wrote:

seems that by creating a view ( with the query i mentioned before ), my
queries were sped up by roughly 10 seconds... odd odd odd.

Not really so odd when you think of everything the backend has to do to
process a query. Parsing, planning, and optimizing (not necessarily in
that order) can be very costly. A view basically skips most of those
three steps.

all thats taken out is the parsing bit. the planning and optimizing still
must be done ( unless its done when the view is/was created ), yesno?
the view should still hit the indexes created for the underlying tables,
right?

Well, a view in the system is really just a stored plan. The actual
text isn't stored afaik. So that tells me there's a good chance that
the optimization step won't be revisited unless the view is dropped.
And since the plan is the output of the optimization step, I'd say you
bypass a good deal of code. The drawback is that if the plan isn't
redone after major table changes, the query may no longer use the most
efficient plan: for example, after adding an index that would affect
the query, or after adding rows to one of the tables, which would make
the cost of an index scan cheaper than a table scan.

To answer your second question: it will hit the indexes if the plan that
was stored when the view was created hit those indexes.