Performance suggestions?

Started by Harvey, Allan AC · over 21 years ago · 5 messages · general
#1 Harvey, Allan AC
HarveyA@OneSteel.com

Hi all,

I'm after suggestions on how to keep the initial performance
of a system I've put together for longer than 2 minutes.

I have a small table of about 20 rows (a constant number) that is receiving
about 160 updates per second.
The table is used to share gathered data with other processes asynchronously.
After 5 minutes it drops to 12 updates per second.
Performance returns after a VACUUM ANALYZE.

I'm using PostgreSQL 7.4.5.
This is the table structure:
      Table "public.lastscan"
  Column   |            Type             | Modifiers
-----------+-----------------------------+-----------
 pointnum  | integer                     | not null
 parameter | character varying(8)        | not null
 value     | double precision            | not null
 dt        | timestamp without time zone | not null
Indexes:
    "lsindex" btree (pointnum, parameter)

Or should I just stick to saving the data with INSERTs, which seem to keep
up indefinitely, and use a different IPC method?

Ta
Allan
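
The slowdown described above is consistent with MVCC bloat: every UPDATE leaves a dead row version behind until VACUUM reclaims it. A rough back-of-envelope sketch, assuming the initial 160 updates/sec held for the full five minutes:

```python
# Back-of-envelope for the slowdown: under MVCC every UPDATE leaves a
# dead row version behind until VACUUM reclaims it, so the 20-row table
# physically grows with every update.
live_rows = 20
updates_per_sec = 160
seconds = 5 * 60                  # the ~5 minutes before throughput collapses

dead_tuples = updates_per_sec * seconds
bloat_factor = (live_rows + dead_tuples) / live_rows

print(dead_tuples)                # 48000 dead row versions
print(bloat_factor)               # 2401.0 -- every scan wades through this
```

Even a sequential scan over a 20-row table becomes expensive when the table is carrying tens of thousands of dead tuples, which is why a VACUUM restores the original performance.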

#2 Dann Corbit
DCorbit@connx.com
In reply to: Harvey, Allan AC (#1)
Re: Performance suggestions?

Is 'pointnum' alone enough to uniquely describe the data (or can it be
chosen in such a way as to uniquely describe the data)? If so, drop
'parameter' from the index.

What happens if you drop the index altogether?

What happens if you change the index type to hashed?

Storing the data in shared memory instead of a table is an option, of
course.
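
One minimal sketch of that shared-memory option, using today's Python standard library (`multiprocessing.shared_memory`, Python 3.8+) purely for illustration: a fixed slab of doubles, one slot per monitored point, updated in place, so there are no dead row versions and nothing to vacuum. The slot layout and sizes here are assumptions, not part of the original system.

```python
# Hypothetical shared-memory replacement for the lastscan table: a flat
# buffer of doubles, one 8-byte slot per (pointnum, parameter) pair,
# overwritten in place on every update.
from multiprocessing import shared_memory
import struct

SLOTS = 20          # one slot per row of the old lastscan table
SLOT_BYTES = 8      # one double-precision value per slot

shm = shared_memory.SharedMemory(create=True, size=SLOTS * SLOT_BYTES)

def write_value(slot: int, value: float) -> None:
    struct.pack_into("d", shm.buf, slot * SLOT_BYTES, value)

def read_value(slot: int) -> float:
    return struct.unpack_from("d", shm.buf, slot * SLOT_BYTES)[0]

write_value(3, 42.5)
observed = read_value(3)
print(observed)     # 42.5

shm.close()
shm.unlink()
```

A real writer/reader pair would attach to the segment by name from separate processes and add locking or per-slot sequence numbers, since nothing here guards against torn reads.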

#3 Neil Conway
neilc@samurai.com
In reply to: Harvey, Allan AC (#1)
Re: Performance suggestions?

On Wed, 2004-12-15 at 10:22 +1100, Harvey, Allan AC wrote:

> I have a small table about 20 rows, a constant, that is receiving
> about 160 updates per second.
> The table is used to share gathered data to other process asynchronously.
> After 5 min it is 12 updates per second.
> Performance returns after a vacuum analyse.

It should be quite feasible to VACUUM this table frequently (once per
minute or even more often).

> Or should I just stick to saving the data, inserts seem to go on and on, and
> use a different IPC method.

MVCC isn't really ideal for this kind of situation, but without knowing
more about your application it's difficult to say whether switching to
another IPC method would be a better choice.

-Neil
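
The once-a-minute VACUUM suggested above could be driven by a small loop; a sketch, with `run_sql` standing in for whatever client call actually issues the statement (a database cursor's execute, for instance) so the loop can be exercised without a live server. The names and intervals are assumptions.

```python
# Minimal periodic-VACUUM driver. run_sql is injected so this can be
# dry-run without a database connection; a real deployment would pass
# something like conn.cursor().execute.
import time
from typing import Callable, Optional

def periodic_vacuum(run_sql: Callable[[str], None],
                    interval_s: float = 60.0,
                    iterations: Optional[int] = None) -> None:
    """Issue VACUUM ANALYZE on the hot table every interval_s seconds."""
    n = 0
    while iterations is None or n < iterations:
        run_sql("VACUUM ANALYZE lastscan")
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval_s)

# Dry run against a stub instead of a live connection:
issued = []
periodic_vacuum(issued.append, interval_s=0.01, iterations=3)
print(len(issued))   # 3
```

Running it with the defaults (`interval_s=60.0`, `iterations=None`) gives the once-a-minute loop; a cron job invoking vacuumdb would do the same thing without custom code.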

#4 Bruno Wolff III
bruno@wolff.to
In reply to: Harvey, Allan AC (#1)
Re: Performance suggestions?

On Wed, Dec 15, 2004 at 10:22:31 +1100,
"Harvey, Allan AC" <HarveyA@OneSteel.com> wrote:

> I have a small table about 20 rows, a constant, that is receiving
> about 160 updates per second.
> The table is used to share gathered data to other process asynchronously.
> After 5 min it is 12 updates per second.
> Performance returns after a vacuum analyse.

You should vacuum that table at very short intervals. From your
description, I would guess that you want to vacuum it at least once a
minute, probably more often if the vacuum itself doesn't slow things
down too much.

#5 Paul Tillotson
pntil@shentel.net
In reply to: Harvey, Allan AC (#1)
Re: Performance suggestions?

Allan,

Postgres is probably not the ideal solution to this problem. If you'd
like to try it anyway, two points:

- If the table really only has about 20 active rows at a time, drop the
index: the planner will never use it on a table that small. (Run EXPLAIN
on your query to check; if the index isn't being used, it is only slowing
your updates down.)

- As said before, VACUUM frequently, maybe even every 10 seconds
(experiment with different intervals).

Paul Tillotson
