slow request

Started by Michael Guiard · over 18 years ago · 9 messages · general
#1 Michael Guiard
michael.guiard@gmail.com

Hi!
I am using PostgreSQL v8.1.5 with the latest ODBC driver, v8.02.0500.
Requests between my server and my client are very slow. They are both
on the same 100 Mb/s switch, and I have no particular network problems.
I use the pgAdmin tool to run my requests.

My database consists of one table. It has some simple fields (varchar
& int, less than 500 bytes per record) and its primary key is a
varchar(32) (CLSID). The table has 140000 records. I know a CLSID is
not the best choice for a primary key, but it is mine :)

The request "select * from mytable" takes approximately 30 seconds on
the server. The same request on the client takes approximately 400
seconds! What I do not understand is that my network traffic during
this request on the client side is very low: less than 100 KB/s!

Why is it so slow? I suppose my 140000 records are retrieved one by
one... is that true? If so, why?

I tried the same thing with another table whose primary key is an
'int4'. The result is the same: 540 seconds to retrieve 150000 records
at 45 KB/s (average speed) (132 bytes/record * 150000 = 19 MB / 45 KB/s
= 430 seconds).
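A quick check of that arithmetic (figures taken from the measurements above):

```python
# Sanity check of the transfer-time figures above:
# 150000 records at ~132 bytes each, observed throughput ~45 KB/s.
record_size = 132            # bytes per record (observed average)
records = 150_000
throughput = 45 * 1024       # 45 KB/s, in bytes per second

total_bytes = record_size * records
seconds = total_bytes / throughput
print(f"{total_bytes / 1e6:.1f} MB, ~{seconds:.0f} s")   # ~19.8 MB, ~430 s
```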

How can I improve this performance?
thanks
Mike

#2 Scott Marlowe
scott.marlowe@gmail.com
In reply to: Michael Guiard (#1)
Re: slow request

On 10/9/07, michael.guiard@gmail.com <michael.guiard@gmail.com> wrote:

[...]

The request "select * from mytable" on the server takes approximately
30 seconds. The same request on the client takes approximately 400
seconds!

[...]

Why is it so slow? I suppose my 140000 records are retrieved one by
one... is that true? If so, why?

Who cares one way or another... They're fast locally and slow
remotely, which means you've likely got some kind of networking issue
going on here.

How fast can you network copy things from the server to the client
without pgsql involved? (i.e. rsync, ftp, wget http, scp etc...)
What OS are the client and server running?
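As a quick way to get that raw-transfer baseline without any of those tools installed, a plain TCP copy shows what the link itself can do with no PostgreSQL involved. This is only a sketch: the payload size is arbitrary, and as written it exercises only the loopback; point the client at the real server's address for a LAN test.

```python
# Minimal raw-TCP throughput check, to compare against the ~45 KB/s
# seen through the ODBC driver. Host/port here are placeholders:
# loopback, with the OS picking a free port.
import socket
import threading
import time

PAYLOAD = b"x" * (4 * 1024 * 1024)   # 4 MB test payload
HOST = "127.0.0.1"

srv = socket.socket()
srv.bind((HOST, 0))                  # port 0 = let the OS choose
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    # Accept one connection, send the payload, then close.
    conn, _ = srv.accept()
    conn.sendall(PAYLOAD)
    conn.close()

threading.Thread(target=serve, daemon=True).start()

cli = socket.create_connection((HOST, port))
start = time.time()
received = 0
while chunk := cli.recv(65536):      # loop until the server closes
    received += len(chunk)
elapsed = time.time() - start
cli.close()
srv.close()

print(f"{received} bytes in {elapsed:.2f}s "
      f"({received / max(elapsed, 1e-9) / 1024:.0f} KB/s)")
```

If even this raw copy crawls at tens of KB/s between the two hosts, the problem is below PostgreSQL (duplex mismatch, bad cable, driver); if it runs at wire speed, look at the client software instead.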

#3 Frank Finner
postgresql@finner.de
In reply to: Michael Guiard (#1)
Re: slow request

On Tue, 09 Oct 2007 10:34:45 -0000 michael.guiard@gmail.com thought long, then sat down and wrote:

[...]

The request "select * from mytable" on the server takes approximately
30 seconds. The same request on the client takes approximately 400
seconds!

[...]

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

This could be a DNS-related problem, if it is not the request itself but the connection that is slow. How did you measure these times and speeds?

--
Frank Finner

Invenius - Lösungen mit Linux
Köpfchenstraße 36
57072 Siegen
Telefon: 0271 231 8606 Mail: frank.finner@invenius.de
Telefax: 0271 231 8608 Web: http://www.invenius.de
Key fingerprint = 90DF FF40 582E 6D6B BADF 6E6A A74E 67E4 E788 2651

#4 Steve Atkins
steve@blighty.com
In reply to: Michael Guiard (#1)
Re: slow request

On Oct 9, 2007, at 6:34 AM, michael.guiard@gmail.com wrote:

[...]

The request "select * from mytable" on the server takes approximately
30 seconds. The same request on the client takes approximately 400
seconds!

[...]

How can I improve this performance?

This has been reported before, IIRC, and one issue was that pgAdmin
spent a lot longer rendering the data than it did retrieving it.

So before you look any further, run the same query from psql and
see if that changes anything.

Cheers,
Steve

#5 Owen Hartnett
owen@clipboardinc.com
In reply to: Steve Atkins (#4)
Generating subtotal reports direct from SQL

I'm hoping there's a really easy way of doing this that I'm just missing:

Given a Select statement such as:

Select ID, code, amount from foo where code < 10;

that gives me a table like this:

ID   code   amount
------------------
 1     4      20
 2     3      10
 3     4      15
 4     2      10
 5     3       9
 6     3       8

I want to generate a report table like the following (group by code):

ID   code   amount
------------------
 4     2      10
       2      10
 2     3      10
 5     3       9
 6     3       8
       3      27
 1     4      20
 3     4      15
       4      35
              72

Such that the final table has additional subtotal rows with the
aggregate sum of the amounts. I'm thinking I can generate two tables
and merge them, but is there an easier way using a fancy Select
statement?

-Owen

#6 Erik Jones
erik@myemma.com
In reply to: Owen Hartnett (#5)
Re: Generating subtotal reports direct from SQL

On Oct 9, 2007, at 4:53 PM, Owen Hartnett wrote:

[...]

Such that the final table has additional subtotal rows with the
aggregate sum of the amounts. I'm thinking I can generate two
tables and merge them, but is there an easier way using a fancy
Select statement?

Try generating them and merging them in one query:

SELECT id, code, amount
FROM (SELECT id, code, amount
      FROM table_name
      UNION ALL
      SELECT null, code, sum(amount)
      FROM table_name
      GROUP BY code) t
ORDER BY code, id

Note that I didn't test that.

Erik Jones

Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

#7 Bruce Momjian
bruce@momjian.us
In reply to: Owen Hartnett (#5)
Re: Generating subtotal reports direct from SQL

"Owen Hartnett" <owen@clipboardinc.com> writes:

Such that the final table has additional subtotal rows with the aggregate sum
of the amounts. I'm thinking I can generate two tables and merge them, but is
there an easier way using a fancy Select statement?

Unfortunately, the fancy SQL feature you're looking for is ROLLUP, which
Postgres doesn't support.

I think you would have to do

(
select id, code, amount from foo where code < 10
union all
select null, code, sum(amount) from foo where code < 10 group by code
) order by code, id

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com

#8 Owen Hartnett
owen@clipboardinc.com
In reply to: Bruce Momjian (#7)
Re: Generating subtotal reports direct from SQL

At 1:32 AM +0100 10/10/07, Gregory Stark wrote:

[...]

Unfortunately the fancy SQL feature you're looking for is ROLLUP which
Postgres doesn't support.

I think you would have to do

(
select id, code, amount from foo where code < 10
union all
select null, code, sum(amount) from foo where code < 10 group by code
) order by code, id

Perfect. Just what I was looking for.

Thanks!

-Owen

#9 Michael Guiard
michael.guiard@gmail.com
In reply to: Frank Finner (#3)
Re: slow request

Hi!
Thanks for your answers.
My network works correctly; I have no other problems. I can send and
receive huge files quickly over the Windows file-sharing protocol.
Using the psql tool, I get the same results... very slow traffic... :(
I measured these times and speeds using my watch and a free network
tool, ShowTraffic (Ethereal gives me the same results).

I will try using another switch ...
