Performance issues with compaq server

Started by Samuel J. Sutjiono · May 2002 · 19 messages · general
#1Samuel J. Sutjiono
ssutjiono@wc-group.com

I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.

Does anybody have any idea what causes this problem, or what things I need to look at or check (in the config file)?

I appreciate any help.

Regards,
Samuel

#2Noname
postgres@vrane.com
In reply to: Samuel J. Sutjiono (#1)
Re: Performance issues with compaq server

One thing you can try is the "-F" switch, which will turn off fsync.

$ man postmaster

There are other possibilities, however.
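
A minimal sketch of the two usual ways to do this in the 7.x era (the data directory path is hypothetical; note that with fsync off a crash can lose committed data, so it is mainly appropriate for bulk loads):

```shell
# Pass -F through to the backends when starting the server
# (-o forwards switches to each backend process; path is hypothetical):
postmaster -o -F -D /usr/local/pgsql/data

# Or, equivalently, set it in $PGDATA/postgresql.conf and restart:
#   fsync = false
```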


On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:

I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.

Does anybody have any idea what causes this problem, or what things I need to look at or check (in the config file)?

I appreciate any help.

Regards,
Samuel

#3Martijn van Oosterhout
kleptog@svana.org
In reply to: Samuel J. Sutjiono (#1)
Re: Performance issues with compaq server

On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:

I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.

Does anybody have any idea what causes this problem, or what things I need to look at or check (in the config file)?

Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes its own transaction, which is rather expensive. PostgreSQL 7.2 improves this a bit, but the transaction will help anyway.
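
Martijn's point can be sketched in a self-contained way. SQLite stands in for PostgreSQL below so the demo needs no server (the table and data are made up); the pattern is the same through psql or DBI: one BEGIN/COMMIT around the whole batch instead of one implicit transaction per INSERT.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so we control the transaction boundaries ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cam (host TEXT, slot INT, port INT, mac TEXT)")

data = [("sw%d" % i, i % 4, i % 48, "00:00:00:00:00:%02x" % (i % 256))
        for i in range(1000)]

# Fast pattern: one transaction around the whole batch.
# (The slow pattern would be one implicit commit per INSERT.)
conn.execute("BEGIN")
for row in data:
    conn.execute("INSERT INTO cam VALUES (?, ?, ?, ?)", row)
conn.execute("COMMIT")  # one durable commit instead of 1000

count = conn.execute("SELECT count(*) FROM cam").fetchone()[0]
print(count)  # 1000
```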

HTH,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/


Canada, Mexico, and Australia form the Axis of Nations That
Are Actually Quite Nice But Secretly Have Nasty Thoughts About America

#4Holger Marzen
holger@marzen.de
In reply to: Martijn van Oosterhout (#3)
Re: Performance issues with compaq server

On Wed, 8 May 2002, Martijn van Oosterhout wrote:

On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:

I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.

Does anybody have any idea what causes this problem, or what things I need to look at or check (in the config file)?

Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes its own transaction, which is rather expensive. PostgreSQL 7.2 improves this a bit, but the transaction will help anyway.

ACK. On given hardware I get about 150 inserts per second. Using a begin/end transaction around a group of 100 inserts speeds it up to about 450 inserts per second.

But beware: if one insert fails (duplicate key, faulty data), then you have to re-insert the remaining rows as single transactions; otherwise all rows of the previous transaction are discarded.

--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1

#5Doug McNaught
doug@wireboard.com
In reply to: Holger Marzen (#4)
Re: Performance issues with compaq server

Holger Marzen <holger@marzen.de> writes:

ACK. On given hardware I get about 150 inserts per second. Using a begin/end transaction around a group of 100 inserts speeds it up to about 450 inserts per second.

COPY is even faster, as there is less query parsing to be done; plus you get a transaction per COPY statement even without BEGIN/END.
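
For reference, the COPY syntax of that era looks like this. The file path and table are hypothetical, and the file must be readable by the server process (psql's \copy reads a client-side file instead):

```sql
-- Bulk-load a tab-delimited file (PostgreSQL 7.x syntax; the path
-- is hypothetical and must be readable by the backend):
COPY cam FROM '/tmp/cam.txt' USING DELIMITERS '\t';
```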

But beware: if one insert fails (duplicate key, faulty data), then you have to re-insert the remaining rows as single transactions; otherwise all rows of the previous transaction are discarded.

Hmm, don't you have to ROLLBACK and redo the whole transaction without the offending row(s), since you can't commit while in ABORT state? Or am I misunderstanding?

-Doug

#6Steve Atkins
steve@blighty.com
In reply to: Doug McNaught (#5)
Re: Performance issues with compaq server

On Wed, May 08, 2002 at 11:02:35AM -0400, Doug McNaught wrote:

Holger Marzen <holger@marzen.de> writes:

ACK. On given hardware I get about 150 inserts per second. Using a begin/end transaction around a group of 100 inserts speeds it up to about 450 inserts per second.

COPY is even faster, as there is less query parsing to be done; plus you get a transaction per COPY statement even without BEGIN/END.

Does anyone have any performance figures to hand on COPY vs BEGIN/INSERT/END,
on indexed or unindexed tables?

Cheers,
Steve

#7Holger Marzen
holger@marzen.de
In reply to: Doug McNaught (#5)
Re: Performance issues with compaq server

On 8 May 2002, Doug McNaught wrote:

Holger Marzen <holger@marzen.de> writes:

ACK. On given hardware I get about 150 inserts per second. Using a begin/end transaction around a group of 100 inserts speeds it up to about 450 inserts per second.

COPY is even faster, as there is less query parsing to be done; plus you get a transaction per COPY statement even without BEGIN/END.

Yes, but I wanted to change something in some rows, so I used Perl and INSERT.

But beware: if one insert fails (duplicate key, faulty data), then you have to re-insert the remaining rows as single transactions; otherwise all rows of the previous transaction are discarded.

Hmm, don't you have to ROLLBACK and redo the whole transaction without the offending row(s), since you can't commit while in ABORT state? Or am I misunderstanding?

Postgres complains and doesn't accept the inserts that follow a failed one until the end of the transaction. I haven't had time yet to figure out whether it rolls back the preceding inserts.

Is there a rule in the SQL standard that describes what should happen if some statements in a transaction fail and the program issues a COMMIT?

--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1

#8Diehl, Jeffrey
jdiehl@sandia.gov
In reply to: Doug McNaught (#5)
having trouble w/ having clause...

Hi all,

This is a simple question, but for the life of me, I can't see what is wrong with this query:

select host,slot,port,count(mac) as d from cam group by host,slot,port having d>1;

I'm running 7.2.

Thanx in advance,

--
Mike Diehl
Network Tools Devl.
Sandia Labs
(505) 284-3137

#9Ian Lawrence Barwick
barwick@gmail.com
In reply to: Diehl, Jeffrey (#8)
Re: having trouble w/ having clause...

On Wednesday 08 May 2002 22:58, Mike Diehl wrote:

Hi all,

This is a simple question, but for the life of me, I can't see what is
wrong with this query:

select host,slot,port,count(mac) as d from cam group by host,slot,port
having d>1;

Try:

select host,slot,port,count(mac) as d from cam group by host,slot,port
having count(mac) > 1

See:

http://www.postgresql.org/idocs/index.php?sql-select.html

"output_name

Specifies another name for an output column using the AS clause. This name is
primarily used to label the column for display. It can also be used to refer
to the column's value in ORDER BY and GROUP BY clauses. But the output_name
cannot be used in the WHERE or HAVING clauses; write out the expression
instead."
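
The corrected query can be sanity-checked without a PostgreSQL server. Here is a sketch using SQLite with made-up data (note that SQLite, unlike PostgreSQL, happens to accept the output alias in HAVING, so the aggregate is written out in full as recommended above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cam (host TEXT, slot INT, port INT, mac TEXT)")
conn.executemany(
    "INSERT INTO cam VALUES (?, ?, ?, ?)",
    [
        ("sw1", 1, 1, "aa"),  # sw1/1/1 has two MACs -> should be reported
        ("sw1", 1, 1, "bb"),
        ("sw2", 1, 2, "cc"),  # only one MAC -> filtered out by HAVING
    ],
)

# Repeat the aggregate expression in HAVING instead of using the alias d.
result = conn.execute(
    "SELECT host, slot, port, count(mac) AS d "
    "FROM cam GROUP BY host, slot, port HAVING count(mac) > 1"
).fetchall()
print(result)  # [('sw1', 1, 1, 2)]
```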

HTH

Ian Barwick

#10Doug McNaught
doug@wireboard.com
In reply to: Holger Marzen (#7)
Re: Performance issues with compaq server

Holger Marzen <holger@marzen.de> writes:

On 8 May 2002, Doug McNaught wrote:

Hmm, don't you have to ROLLBACK and redo the whole transaction without the offending row(s), since you can't commit while in ABORT state? Or am I misunderstanding?

Postgres complains and doesn't accept the inserts that follow a failed one until the end of the transaction. I haven't had time yet to figure out whether it rolls back the preceding inserts.

I'm pretty sure that the transaction goes into ABORT state and all you can do is roll back and start over, minus the offending row(s).

Is there a rule in the SQL standard that describes what should happen if some statements in a transaction fail and the program issues a COMMIT?

I think PG's is standard behavior; that's kind of the whole point of
having transactions.

-Doug

#11Curt Sampson
cjs@cynic.net
In reply to: Holger Marzen (#7)
Re: Performance issues with compaq server

On Wed, 8 May 2002, Holger Marzen wrote:

COPY is even faster as there is less query parsing to be done, plus
you get a transaction per COPY statement even without BEGIN/END.

Yes, but I wanted to change something in some rows, so I used perl and
insert.

Why not use Perl and COPY?

cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC

#12Holger Marzen
holger@marzen.de
In reply to: Doug McNaught (#10)
Re: Performance issues with compaq server

On 9 May 2002, Doug McNaught wrote:

Holger Marzen <holger@marzen.de> writes:

Is there a rule in the SQL standard that describes what should happen if some statements in a transaction fail and the program issues a COMMIT?

I think PG's is standard behavior; that's kind of the whole point of
having transactions.

Which one is the standard behavior:

- rolling back the whole transaction if you do a COMMIT, or
- keeping the changes up to the first failing statement and ignoring everything that follows if you do a COMMIT?

--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1

#13Holger Marzen
holger@marzen.de
In reply to: Curt Sampson (#11)
Re: Performance issues with compaq server

On Thu, 9 May 2002, Curt Sampson wrote:

On Wed, 8 May 2002, Holger Marzen wrote:

COPY is even faster, as there is less query parsing to be done; plus you get a transaction per COPY statement even without BEGIN/END.

Yes, but I wanted to change something in some rows, so I used Perl and INSERT.

Why not use Perl and COPY?

A good idea. I'll try it if the customer complains about the speed.

--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1

#14John Gray
jgray@azuli.co.uk
In reply to: Holger Marzen (#12)
Re: Performance issues with compaq server

On Thu, 2002-05-09 at 12:23, Holger Marzen wrote:

On 9 May 2002, Doug McNaught wrote:

Holger Marzen <holger@marzen.de> writes:

Is there a rule in the SQL standard that describes what should happen if some statements in a transaction fail and the program issues a COMMIT?

I think PG's is standard behavior; that's kind of the whole point of
having transactions.

- rolling back the whole transaction if you do a COMMIT, or
- keeping the changes up to the first failing statement and ignoring everything that follows if you do a COMMIT?

I can't speak to the phrasing of the standard, but PG's behaviour is:

Everything in the transaction is discarded. All the inserts made since the BEGIN, before the error, are rolled back. All subsequent inserts are ignored (generating an error that the transaction is in ABORT state). The subsequent COMMIT will end the transaction (and thus clear the abort state) but *will not commit anything*.

This means that scripts work as intended, namely that all or none of the commands within a transaction block succeed[*]. So if you split your inserts into blocks and one block fails, you only need to correct that block and load it again.

Regards

John

[*] This is handy for things like reindexing or dropping/recreating a table in a script. The DROP TABLE stands a good chance of succeeding, whereas the prospect of a syntax error in the CREATE TABLE makes it more likely to fail. Committing the DROP TABLE on its own would not produce the end result you intended...
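
John's all-or-nothing description can be sketched in a self-contained way. SQLite stands in for PostgreSQL here (it has no ABORT state, so the rollback is issued explicitly in the error handler), but the outcome matches: nothing committed before the failing statement survives.

```python
import sqlite3

# isolation_level=None: autocommit mode, transactions managed by hand.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")      # succeeds inside the transaction
try:
    conn.execute("INSERT INTO t VALUES (1)")  # duplicate key: fails
except sqlite3.IntegrityError:
    # PostgreSQL would force this outcome via its ABORT state;
    # here we roll back explicitly.
    conn.execute("ROLLBACK")

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 0 -- the first, successful insert was rolled back too
```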

--
John Gray
Azuli IT
www.azuli.co.uk

#15Denis Gasparin
denis@edistar.com
In reply to: Samuel J. Sutjiono (#1)
Re: Performance issues with compaq server

The Compaq servers you mention use the Smart Array controller, which is very slow at writing data to the hard disks. Are you using RAID?

I suggest buying a controller with a larger write cache (the Smart Array controller has no write cache)... You should see a large performance improvement...

Also, do not forget to set fsync to false and to use the COPY command whenever possible...

Bye!
--
Doct. Eng. Denis Gasparin: denis@edistar.com
---------------------------
Programmer & System Administrator - Edistar srl

On Wed, 2002-05-08 at 00:40, Samuel J. Sutjiono wrote:


I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very, very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.

Does anybody have any idea what causes this problem, or what things I need to look at or check (in the config file)?

I appreciate any help.

Regards,
Samuel

#16Diehl, Jeffrey
jdiehl@sandia.gov
In reply to: Ian Lawrence Barwick (#9)
Re: having trouble w/ having clause...

Yep, this fixed it. Thanx,

On Wednesday 08 May 2002 05:13 pm, Ian Barwick wrote:

On Wednesday 08 May 2002 22:58, Mike Diehl wrote:

Hi all,

This is a simple question, but for the life of me, I can't see what
is wrong with this query:

select host,slot,port,count(mac) as d from cam group by
host,slot,port having d>1;

Try:

select host,slot,port,count(mac) as d from cam group by host,slot,port
having count(mac) > 1

See:

http://www.postgresql.org/idocs/index.php?sql-select.html

"output_name

Specifies another name for an output column using the AS clause. This
name is primarily used to label the column for display. It can also be
used to refer to the column's value in ORDER BY and GROUP BY clauses.
But the output_name cannot be used in the WHERE or HAVING clauses;
write out the expression instead."

HTH

Ian Barwick

--
Mike Diehl
Network Tools Devl.
Sandia Labs
(505) 284-3137

#17Curt Sampson
cjs@cynic.net
In reply to: Denis Gasparin (#15)
Re: Performance issues with compaq server

On 9 May 2002, Denis Gasparin wrote:

The Compaq servers you mention use the Smart Array controller, which is very slow at writing data to the hard disks. Are you using RAID?

I suggest buying a controller with a larger write cache (the Smart Array controller has no write cache)... You should see a large performance improvement...

And take your transaction log off the RAID entirely and put it on a mirrored pair of disks instead.

cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC

#18Command Prompt, Inc.
pgsql-general@commandprompt.com
In reply to: Holger Marzen (#4)
MacOS X Shared Buffers (SHMMAX)?

Good day,

Does anyone have any experience increasing the amount of shared memory
buffers available to Mac OS X/Darwin? When starting with more than 128
backend connections on a test G3, I encounter:

-----------------------------------------------------------------------------------
IpcMemoryCreate: shmget(key=5432001, size=33357824, 03600) failed: Invalid argument

This error usually means that PostgreSQL's request for a shared memory
segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
-----------------------------------------------------------------------------------

However, I can find no helpful information on how Darwin handles this
configuration. Hopefully it does not require a recompile of the kernel. ;)

Any knowledge on this topic would be greatly appreciated.

Best Regards,
Jw.
--
jlx@commandprompt.com
by way of pgsql-general@commandprompt.com
http://www.postgresql.info/
http://www.commandprompt.com/

#19Tom Lane
tgl@sss.pgh.pa.us
In reply to: Command Prompt, Inc. (#18)
Re: MacOS X Shared Buffers (SHMMAX)?

"Command Prompt, Inc." <pgsql-general@commandprompt.com> writes:

Does anyone have any experience in increasing the amount of shared memory
buffers available to Mac OS X/Darwin?

A little bit of googling turns up Apple bug reports stating that OS X
has a hardwired limit on shared memory, so no: short of modifying
and recompiling the kernel, you're out of luck.

regards, tom lane