maximum of postgres ?

Started by Fernezelyi Marton · over 27 years ago · 3 messages

[reposted from pgsql-admin list]

Hi,

We have a rather simple database with 2 tables and 2 indices. The tables
contain char, int, and bool type fields, and both have ~60000 records now.

-rw------- 1 postgres postgres 3727360 Jun 5 11:45 mail
-rw------- 1 postgres postgres 1843200 Jun 4 02:45 mail_name_key
-rw------- 1 postgres postgres 9977856 Jun 5 11:45 pers
-rw------- 1 postgres postgres 1835008 Jun 4 02:45 pers_name_key

We would like to reach at least 15-20 queries per second; about 95 percent
of them are SELECT id FROM mail WHERE name='name' queries. The rest are
SELECTs on pers or UPDATEs on one of the two tables.
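
A quick way to check that the name lookup really uses the mail_name_key
index is EXPLAIN. A minimal sketch, assuming (from the file names above)
that mail_name_key is an index on mail(name):

  -- Show the plan the optimizer chooses for the hot query.
  EXPLAIN SELECT id FROM mail WHERE name = 'name';
  -- The plan should report an index scan on mail rather than a
  -- sequential scan; if it does not, run VACUUM ANALYZE so the
  -- optimizer has current statistics to work with.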

When the number of concurrent queries is 30 or higher, Postgres answers
very slowly and writes

NOTICE: SIAssignBackendId: discarding tag 2147339305
FATAL 1: Backend cache invalidation initialization failed

messages to the log.

With 10 concurrent queries everything goes fine, but the rate is only
8 queries/sec. Is this the maximum load Postgres can handle?

Are there any fine-tuning possibilities for higher performance?

Some other questions:

1. How often does the database have to be vacuumed? (Ours is vacuumed 3
times a day now; see the sketch after this list.)
2. Why is SELECT * much faster than SELECT id before vacuuming?
(`id' is a field in the table.)
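
For reference, a periodic maintenance pass can be a plain VACUUM ANALYZE
per table, run from cron for example. A minimal SQL sketch, assuming the
two table names shown in the file listing above:

  -- Reclaim space left by updated/deleted rows and refresh the
  -- statistics the optimizer uses when choosing index scans.
  VACUUM ANALYZE mail;
  VACUUM ANALYZE pers;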

Postmaster runs with options: postmaster -B 468 -i -o -F.

Backend system: FreeBSD-2.2.6R, PII-400MHz, 64MB, UW SCSI RAID
Postgres version: 6.3.2

Thanks,
Marci

#2 Vadim Mikheev
vadim@krs.ru
In reply to: Fernezelyi Marton (#1)
Re: [HACKERS] maximum of postgres ?

Fernezelyi Marton wrote:

We would like to reach at least 15-20 queries per second; about 95 percent
of them are SELECT id FROM mail WHERE name='name' queries. The rest are
SELECTs on pers or UPDATEs on one of the two tables.

When the number of concurrent queries is 30 or higher, Postgres answers
very slowly and writes

NOTICE: SIAssignBackendId: discarding tag 2147339305
FATAL 1: Backend cache invalidation initialization failed

messages to the log.

With 10 concurrent queries everything goes fine, but the rate is only
8 queries/sec. Is this the maximum load Postgres can handle?

I hope that both issues will be addressed in 6.4 by removing
invalidation code and skipping fsync() after each SELECT...

Vadim

#3 David Gould
dg@illustra.com
In reply to: Fernezelyi Marton (#1)
Re: [HACKERS] maximum of postgres ?

[Fernezelyi Marton's original post quoted in full; see message #1 above.]

A couple of suggestions:

Increase the number of buffers. I suggest you use 1024 or even more.

Dump and reload the tables and rebuild the indexes. If this helps, try
to do it periodically.
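
For the index-rebuild part of this suggestion, a minimal SQL sketch,
assuming (from the file names in the original post) that mail_name_key is
a unique index on mail(name); the dump and reload of the tables themselves
would be done outside SQL with pg_dump:

  -- Rebuilding an index from scratch yields a compact index file,
  -- which can help if the index keeps growing despite vacuuming.
  DROP INDEX mail_name_key;
  -- The *_key name suggests a unique index; leave out UNIQUE otherwise.
  CREATE UNIQUE INDEX mail_name_key ON mail (name);
  -- Repeat for pers_name_key on pers(name).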

I will post a patch to 6.3.2 on the patches and hackers lists this weekend
that may improve your performance when there are large numbers of concurrent
queries. This will be the S_LOCK patch. Since I will also post a version for
6.4, make sure you get the 6.3.2 version. I would also suggest backing up
your source tree before applying the patch just in case I make a mistake.

If the machine is paging at all under heavy load, add memory. 64MB is not
very much to support 30 db sessions.

-dg

David Gould dg@illustra.com 510.628.3783 or 510.305.9468
Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612
"Don't worry about people stealing your ideas. If your ideas are any
good, you'll have to ram them down people's throats." -- Howard Aiken