Performance issues with Compaq server
I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.
Does anybody have any idea what could cause this problem, or things that I need to look at or check (in the config file)?
I appreciate any help.
Regards,
Samuel
One thing you can try is the "-F"
switch, which will turn off fsync:
$ man postmaster
There are other possibilities, however.
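For a persistent setting rather than a command-line switch, the same thing can be put in postgresql.conf (the parameter name here is assumed from the thread's later advice to "set fsync to false"; beware that running without fsync trades crash safety for speed, so it is mainly suitable for bulk loads):

```
# postgresql.conf (sketch; verify the exact syntax for your version)
fsync = false
```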
On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:
> I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.
> Does anybody have any idea what could cause this problem, or things that I need to look at or check (in the config file)?
> I appreciate any help.
> Regards,
> Samuel
On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:
> I tried to load data using INSERT statements into our new database servers
> (Compaq DL580, DL590, DL380 with SCSI drives), but it was very slow.
> The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.
> Does anybody have any idea what could cause this problem, or things that I
> need to look at or check (in the config file)?
Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes
its own transaction, which is rather expensive. PostgreSQL 7.2 improves this
a bit, but the transaction will help anyway.
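The effect of batching is easy to demonstrate. The sketch below uses SQLite through Python's standard library rather than PostgreSQL (table and column names are invented), but the principle is the same: one BEGIN/COMMIT around the whole loop instead of one implicit transaction per row.

```python
import sqlite3

# Autocommit mode: we issue BEGIN/COMMIT ourselves, mirroring the advice above.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE cam (host TEXT, slot INTEGER, port INTEGER, mac TEXT)")

rows = [("sw1", 1, i % 24, "00:00:00:00:00:%02x" % i) for i in range(100)]

# Without the explicit BEGIN each INSERT would be its own transaction;
# batching pays the per-transaction cost once per 100 rows instead.
conn.execute("BEGIN")
for row in rows:
    conn.execute("INSERT INTO cam VALUES (?, ?, ?, ?)", row)
conn.execute("COMMIT")

print(conn.execute("SELECT count(*) FROM cam").fetchone()[0])
```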
HTH,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Canada, Mexico, and Australia form the Axis of Nations That
Are Actually Quite Nice But Secretly Have Nasty Thoughts About America
On Wed, 8 May 2002, Martijn van Oosterhout wrote:
> On Tue, May 07, 2002 at 06:40:32PM -0400, Samuel J. Sutjiono wrote:
> > I tried to load data using INSERT statements into our new database servers
> > (Compaq DL580, DL590, DL380 with SCSI drives), but it was very slow.
> > The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.
> > Does anybody have any idea what could cause this problem, or things that I
> > need to look at or check (in the config file)?
> Put them in a transaction (BEGIN/COMMIT). Without that, each insert becomes
> its own transaction, which is rather expensive. PostgreSQL 7.2 improves this
> a bit, but the transaction will help anyway.
ACK. On given hardware I get about 150 inserts per second. Using a
begin/end transaction for a group of 100 inserts speeds it up to about
450 inserts per second.
But beware: if one insert fails (duplicate key, faulty data), then you
have to re-insert the remaining rows as single transactions, or else all
rows of the previous transaction are discarded.
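The recovery path described here can be sketched as follows (again SQLite via Python's standard library standing in for PostgreSQL, with invented data; in PostgreSQL the failed statement also puts the transaction in ABORT state, as discussed later in the thread).

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE cam (mac TEXT PRIMARY KEY)")

rows = ["aa", "bb", "aa", "cc"]  # "aa" repeats: duplicate key

try:
    conn.execute("BEGIN")
    for mac in rows:
        conn.execute("INSERT INTO cam VALUES (?)", (mac,))
    conn.execute("COMMIT")
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")      # everything in the batch is discarded
    for mac in rows:              # fall back to one transaction per row
        try:
            conn.execute("INSERT INTO cam VALUES (?)", (mac,))
        except sqlite3.IntegrityError:
            pass                  # skip only the offending row

print(sorted(r[0] for r in conn.execute("SELECT mac FROM cam")))
```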
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
Holger Marzen <holger@marzen.de> writes:
> ACK. On given hardware I get about 150 inserts per second. Using a
> begin/end transaction for a group of 100 inserts speeds it up to about
> 450 inserts per second.
COPY is even faster as there is less query parsing to be done, plus
you get a transaction per COPY statement even without BEGIN/END.
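COPY itself is PostgreSQL-specific, but the two savings named here, less parsing and a single transaction, can be illustrated with a prepared statement reused across rows. A rough analogy using SQLite via Python's standard library (not PostgreSQL's COPY):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE cam (host TEXT, mac TEXT)")

rows = [("sw1", "00:%02x" % i) for i in range(1000)]

# The statement is parsed/prepared once and executed per row, inside a
# single transaction: the same two savings that make COPY fast.
conn.execute("BEGIN")
conn.executemany("INSERT INTO cam VALUES (?, ?)", rows)
conn.execute("COMMIT")

print(conn.execute("SELECT count(*) FROM cam").fetchone()[0])
```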
> But beware: if one insert fails (duplicate key, faulty data), then you
> have to re-insert the remaining rows as single transactions, or else all
> rows of the previous transaction are discarded.
Hmm, don't you have to ROLLBACK and redo the whole transaction without
the offending row(s), since you can't COMMIT while in ABORT state? Or
am I misunderstanding?
-Doug
On Wed, May 08, 2002 at 11:02:35AM -0400, Doug McNaught wrote:
> Holger Marzen <holger@marzen.de> writes:
> > ACK. On given hardware I get about 150 inserts per second. Using a
> > begin/end transaction for a group of 100 inserts speeds it up to about
> > 450 inserts per second.
> COPY is even faster as there is less query parsing to be done, plus
> you get a transaction per COPY statement even without BEGIN/END.
Does anyone have any performance figures to hand on COPY vs BEGIN/INSERT/END,
on indexed or unindexed tables?
Cheers,
Steve
On 8 May 2002, Doug McNaught wrote:
> Holger Marzen <holger@marzen.de> writes:
> > ACK. On given hardware I get about 150 inserts per second. Using a
> > begin/end transaction for a group of 100 inserts speeds it up to about
> > 450 inserts per second.
> COPY is even faster as there is less query parsing to be done, plus
> you get a transaction per COPY statement even without BEGIN/END.
Yes, but I wanted to change something in some rows, so I used Perl and
INSERT.
> > But beware: if one insert fails (duplicate key, faulty data), then you
> > have to re-insert the remaining rows as single transactions, or else all
> > rows of the previous transaction are discarded.
> Hmm, don't you have to ROLLBACK and redo the whole transaction without
> the offending row(s), since you can't COMMIT while in ABORT state? Or
> am I misunderstanding?
Postgres complains and doesn't accept the following inserts after a
failed one until the end of the transaction. I didn't have the time yet to
figure out if it rolls back the preceding inserts.
Is there a rule in the SQL standards that describes what should happen if
some statements in a transaction fail and the program issues a COMMIT?
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
Hi all,
This is a simple question, but for the life of me, I can't see what is wrong with this query:
select host,slot,port,count(mac) as d from cam group by host,slot,port having d>1;
I'm running 7.2.
Thanx in advance,
--
Mike Diehl
Network Tools Devl.
Sandia Labs
(505) 284-3137
On Wednesday 08 May 2002 22:58, Mike Diehl wrote:
> Hi all,
> This is a simple question, but for the life of me, I can't see what is
> wrong with this query:
> select host,slot,port,count(mac) as d from cam group by host,slot,port
> having d>1;
Try:
select host,slot,port,count(mac) as d from cam group by host,slot,port
having count(mac) > 1
See:
http://www.postgresql.org/idocs/index.php?sql-select.html
"output_name
Specifies another name for an output column using the AS clause. This name is
primarily used to label the column for display. It can also be used to refer
to the column's value in ORDER BY and GROUP BY clauses. But the output_name
cannot be used in the WHERE or HAVING clauses; write out the expression
instead."
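The corrected query can be sanity-checked with a small sketch (invented sample data; SQLite via Python's standard library stands in for PostgreSQL here, and note that SQLite is laxer and tolerates the alias in HAVING, so only the standard form is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cam (host TEXT, slot INTEGER, port INTEGER, mac TEXT)")
conn.executemany("INSERT INTO cam VALUES (?, ?, ?, ?)", [
    ("sw1", 1, 1, "aa"), ("sw1", 1, 1, "bb"),  # two MACs on one port
    ("sw1", 1, 2, "cc"),                       # single MAC: filtered out
])

# HAVING repeats the aggregate expression instead of the output alias.
dupes = conn.execute(
    "SELECT host, slot, port, count(mac) AS d "
    "FROM cam GROUP BY host, slot, port "
    "HAVING count(mac) > 1").fetchall()
print(dupes)
```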
HTH
Ian Barwick
Holger Marzen <holger@marzen.de> writes:
> On 8 May 2002, Doug McNaught wrote:
> > Hmm, don't you have to ROLLBACK and redo the whole transaction without
> > the offending row(s), since you can't COMMIT while in ABORT state? Or
> > am I misunderstanding?
> Postgres complains and doesn't accept the following inserts after a
> failed one until the end of the transaction. I didn't have the time yet to
> figure out if it rolls back the preceding inserts.
I'm pretty sure that the transaction goes into ABORT state and all you
can do is rollback and start over, minus the offending row(s).
> Is there a rule in the SQL standards that describes what should happen if
> some statements in a transaction fail and the program issues a COMMIT?
I think PG's is standard behavior; that's kind of the whole point of
having transactions.
-Doug
On Wed, 8 May 2002, Holger Marzen wrote:
> > COPY is even faster as there is less query parsing to be done, plus
> > you get a transaction per COPY statement even without BEGIN/END.
> Yes, but I wanted to change something in some rows, so I used Perl and
> INSERT.
Why not use Perl and COPY?
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC
On 9 May 2002, Doug McNaught wrote:
> Holger Marzen <holger@marzen.de> writes:
> > Is there a rule in the SQL standards that describes what should happen if
> > some statements in a transaction fail and the program issues a COMMIT?
> I think PG's is standard behavior; that's kind of the whole point of
> having transactions.
Which one is the standard behaviour:
- rolling back the whole transaction if you do a COMMIT
or
- keeping the changes until the first failing statement and ignoring
everything that follows if you do a COMMIT
?
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
On Thu, 9 May 2002, Curt Sampson wrote:
> On Wed, 8 May 2002, Holger Marzen wrote:
> > > COPY is even faster as there is less query parsing to be done, plus
> > > you get a transaction per COPY statement even without BEGIN/END.
> > Yes, but I wanted to change something in some rows, so I used Perl and
> > INSERT.
> Why not use Perl and COPY?
A good idea. I'll try it if the customer complains about the speed.
--
PGP/GPG Key-ID:
http://blackhole.pca.dfn.de:11371/pks/lookup?op=get&search=0xB5A1AFE1
On Thu, 2002-05-09 at 12:23, Holger Marzen wrote:
> On 9 May 2002, Doug McNaught wrote:
> > Holger Marzen <holger@marzen.de> writes:
> > > Is there a rule in the SQL standards that describes what should happen if
> > > some statements in a transaction fail and the program issues a COMMIT?
> > I think PG's is standard behavior; that's kind of the whole point of
> > having transactions.
> - rolling back the whole transaction if you do a COMMIT
> or
> - keeping the changes until the first failing statement and ignoring
> everything that follows if you do a COMMIT
I can't speak to the phrasing of the standard, but PG's behaviour is:
everything in the transaction is ignored. All the inserts made before
the error, since the BEGIN, are rolled back. All subsequent inserts are
ignored (generating an error that the transaction is in ABORT state). The
subsequent COMMIT will end the transaction (and thus clear the abort
state) but *will not commit anything*.
This means that scripts will work as intended, namely that all or none
of the commands within a transaction block will succeed[*]. So if you
split your insert into blocks, and one block fails, you will only need
to correct that block and try to load it again.
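This all-or-nothing behaviour can be sketched as follows (SQLite via Python's standard library; unlike PostgreSQL it does not force an ABORT state, so the ROLLBACK is issued explicitly here):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")      # succeeds...
try:
    conn.execute("INSERT INTO t VALUES (1)")  # ...then a duplicate key fails
except sqlite3.IntegrityError:
    conn.execute("ROLLBACK")  # the earlier, successful insert is discarded too

print(conn.execute("SELECT count(*) FROM t").fetchone()[0])
```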
Regards
John
[*] This is handy for doing things like reindexing or drop/recreate
table in a script. The drop table stands a good chance of succeeding,
whereas the prospect of a syntax error in the create table makes it more
likely to fail. A subsequent commit of the DROP TABLE would not produce
the end result you intended....
--
John Gray
Azuli IT
www.azuli.co.uk
The Compaq servers you mention use the Smart Array controller, which is
very slow at writing data to the disks. Are you using RAID?
I suggest buying a controller with a larger write cache (the Smart
Array controller does not have a write cache)... You should see a large
performance improvement...
Also, do not forget to set fsync to false and to use the COPY command
whenever possible...
Bye!
--
Doct. Eng. Denis Gasparin: denis@edistar.com
---------------------------
Programmer & System Administrator - Edistar srl
On Wed, 2002-05-08 at 00:40, Samuel J. Sutjiono wrote:
> I tried to load data using INSERT statements into our new database servers (Compaq DL580, DL590, DL380 with SCSI drives), but it was very slow. The database runs on Red Hat Linux 7.2 and PostgreSQL version 7.1.3.
> Does anybody have any idea what could cause this problem, or things that I need to look at or check (in the config file)?
> I appreciate any help.
> Regards,
> Samuel
Yep, this fixed it. Thanx,
On Wednesday 08 May 2002 05:13 pm, Ian Barwick wrote:
> On Wednesday 08 May 2002 22:58, Mike Diehl wrote:
> > Hi all,
> > This is a simple question, but for the life of me, I can't see what
> > is wrong with this query:
> > select host,slot,port,count(mac) as d from cam group by
> > host,slot,port having d>1;
> Try:
> select host,slot,port,count(mac) as d from cam group by host,slot,port
> having count(mac) > 1
> See:
> http://www.postgresql.org/idocs/index.php?sql-select.html
> "output_name
> Specifies another name for an output column using the AS clause. This
> name is primarily used to label the column for display. It can also be
> used to refer to the column's value in ORDER BY and GROUP BY clauses.
> But the output_name cannot be used in the WHERE or HAVING clauses;
> write out the expression instead."
> HTH
> Ian Barwick
--
Mike Diehl
Network Tools Devl.
Sandia Labs
(505) 284-3137
On 9 May 2002, Denis Gasparin wrote:
> The Compaq servers you mention use the Smart Array controller, which is
> very slow at writing data to the disks. Are you using RAID?
> I suggest buying a controller with a larger write cache (the Smart
> Array controller does not have a write cache)... You should see a large
> performance improvement...
And take your transaction log off of RAID entirely and put it on
a mirrored pair of disks instead.
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC
Good day,
Does anyone have any experience with increasing the number of shared memory
buffers available under Mac OS X/Darwin? When starting with more than 128
backend connections on a test G3, I encounter:
-----------------------------------------------------------------------------------
IpcMemoryCreate: shmget(key=5432001, size=33357824, 03600) failed: Invalid argument
This error usually means that PostgreSQL's request for a shared memory
segment exceeded your kernel's SHMMAX parameter. You can either
reduce the request size or reconfigure the kernel with larger SHMMAX.
-----------------------------------------------------------------------------------
However, I can find no helpful information on how Darwin handles this
configuration. Hopefully it does not require a recompile of the kernel. ;)
Any knowledge on this topic would be greatly appreciated.
Best Regards,
Jw.
--
jlx@commandprompt.com
by way of pgsql-general@commandprompt.com
http://www.postgresql.info/
http://www.commandprompt.com/
"Command Prompt, Inc." <pgsql-general@commandprompt.com> writes:
Does anyone have any experience in increasing the amount of shared memory
buffers available to Mac OS X/Darwin?
A little bit of googling turns up Apple bug reports stating that OSX
has a hardwired limit on shared memory --- so no, short of modifying
and recompiling the kernel, you're outta luck.
regards, tom lane