Server load statistics

Started by Torsten Bronger, over 15 years ago, 6 messages, in general
#1 Torsten Bronger
bronger@physik.rwth-aachen.de

Hallöchen!

I need statistics about the PG server load. At the moment, I use
for this

SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
tup_deleted FROM pg_stat_database WHERE datname='mydb';

However, the figures are absurdly high (> 100,000 rows per second).
If a "row" is one dataset (one user account, one blog entry, one
comment etc), I expect two or three orders of magnitude less. Is my
SQL statement nevertheless a good way to measure how busy the server
is?

Tschö,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten.bronger@jabber.rwth-aachen.de
or http://bronger-jmp.appspot.com

#2 hubert depesz lubaczewski
depesz@depesz.com
In reply to: Torsten Bronger (#1)
Re: Server load statistics

On Sun, Jul 25, 2010 at 12:15:06PM +0200, Torsten Bronger wrote:

> Hallöchen!
>
> I need statistics about the PG server load. At the moment, I use
> for this
>
> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
> tup_deleted FROM pg_stat_database WHERE datname='mydb';
>
> However, the figures are absurdly high (> 100,000 rows per second).
> If a "row" is one dataset (one user account, one blog entry, one
> comment etc), I expect two or three orders of magnitude less. Is my
> SQL statement nevertheless a good way to measure how busy the server
> is?

Did you remember to subtract the *previous* value of this select?
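
[Editor's note: the counters in pg_stat_database are cumulative since the last statistics reset, so they only yield a rate once two samples are differenced. A minimal sketch of that sampling, assuming a 10-second interval and the 'mydb' database from the original query:]

```sql
-- Sketch: sample the cumulative counters twice, then subtract
-- the snapshots to get an approximate rows-per-second rate.
CREATE TEMP TABLE snap AS
SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
       tup_deleted AS total
FROM pg_stat_database
WHERE datname = 'mydb';

SELECT pg_sleep(10);  -- one sampling interval

SELECT (s2.total - snap.total) / 10.0 AS rows_per_second
FROM snap,
     (SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
             tup_deleted AS total
      FROM pg_stat_database
      WHERE datname = 'mydb') AS s2;
```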

Best regards,

depesz

--
Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
jid/gtalk: depesz@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

#3 Andreas Kretschmer
akretschmer@spamfence.net
In reply to: Torsten Bronger (#1)
Re: Server load statistics

Torsten Bronger <bronger@physik.rwth-aachen.de> wrote:

> Hallöchen!
>
> I need statistics about the PG server load. At the moment, I use
> for this
>
> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
> tup_deleted FROM pg_stat_database WHERE datname='mydb';
>
> However, the figures are absurdly high (> 100,000 rows per second).

Do you know that you have to call pg_stat_reset() to restart all the
counters? And yes, an update is a delete plus an insert. With your query
you get 3 rows per single update (tup_updated + tup_deleted +
tup_inserted). That's maybe not what you expected.

Not to mention, every operation affects the system tables, so one single
update, for instance, produces more than 3 table operations.
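
[Editor's note: since the original sum mixes read and write counters, one way to untangle them is to report the two groups separately; a sketch, using the same pg_stat_database view:]

```sql
-- Sketch: report write and read activity separately instead of one sum,
-- so inflated read counters don't drown out the write rate.
SELECT tup_inserted + tup_updated + tup_deleted AS rows_written,
       tup_returned + tup_fetched               AS rows_read
FROM pg_stat_database
WHERE datname = 'mydb';
```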

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°

#4 Torsten Bronger
bronger@physik.rwth-aachen.de
In reply to: Torsten Bronger (#1)
Re: Server load statistics

Hallöchen!

hubert depesz lubaczewski writes:

> On Sun, Jul 25, 2010 at 12:15:06PM +0200, Torsten Bronger wrote:
>
>> I need statistics about the PG server load. At the moment, I use
>> for this
>>
>> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
>> tup_deleted FROM pg_stat_database WHERE datname='mydb';
>>
>> However, the figures are absurdly high (> 100,000 rows per second).
>> If a "row" is one dataset (one user account, one blog entry, one
>> comment etc), I expect two or three orders of magnitude less. Is my
>> SQL statement nevertheless a good way to measure how busy the server
>> is?
>
> Did you remember to subtract the *previous* value of this select?

Yes, I did. I plot curves, and they seem to be proportional to the
load. It's just the absolute values that look very strange.
Additionally, I don't know whether the above sum makes sense at all.

Tschö,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten.bronger@jabber.rwth-aachen.de
or http://bronger-jmp.appspot.com

#5 Torsten Bronger
bronger@physik.rwth-aachen.de
In reply to: Torsten Bronger (#1)
Re: Server load statistics

Hallöchen!

Andreas Kretschmer writes:

> Torsten Bronger <bronger@physik.rwth-aachen.de> wrote:
>
>> I need statistics about the PG server load. At the moment, I use
>> for this
>>
>> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
>> tup_deleted FROM pg_stat_database WHERE datname='mydb';
>>
>> However, the figures are absurdly high (> 100,000 rows per second).
>
> [...] And yes, an update is a delete plus an insert. With your query
> you get 3 rows per single update (tup_updated + tup_deleted +
> tup_inserted). That's maybe not what you expected.

Thank you, I will correct this.

> Not to mention, every operation affects the system tables, so one
> single update, for instance, produces more than 3 table operations.

But so many? My web application reports 10 new objects per second.
Of course, these are "compound" objects. They consist of at most 10
single table entries. This makes 100 rows/s. With further tables
being affected (news feed queue, table of last modifications etc),
let's assume 300 rows/s. Then, with your above remark, we get to
1000 rows/s. But this is a very much stretched upper estimate.

I measure 180,000 rows/s. Thus, it is still too high by a factor of
180. Even if the server is not contacted at all, it's at 50 rows/s
just for its administrative work.

Tschö,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten.bronger@jabber.rwth-aachen.de
or http://bronger-jmp.appspot.com

#6 Sam Mason
sam@samason.me.uk
In reply to: Torsten Bronger (#1)
Re: Server load statistics

On Sun, Jul 25, 2010 at 12:15:06PM +0200, Torsten Bronger wrote:

> I need statistics about the PG server load. At the moment, I use
> for this
>
> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
> tup_deleted FROM pg_stat_database WHERE datname='mydb';
>
> However, the figures are absurdly high (> 100,000 rows per second).
> If a "row" is one dataset (one user account, one blog entry, one
> comment etc), I expect two or three orders of magnitude less. Is my
> SQL statement nevertheless a good way to measure how busy the server
> is?

Yes, but I don't think it's measuring what you think it is.
"tup_returned" gives the number of tuples read during sequential scans,
so you've probably got some queries that are touching many more rows
than you're expecting.
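
[Editor's note: if sequential scans are what inflate the counters, the per-table statistics view can show which tables are responsible. A sketch, assuming the standard pg_stat_user_tables view:]

```sql
-- Sketch: find the tables whose sequential scans read the most tuples,
-- comparing sequential-scan against index-scan activity.
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC NULLS LAST
LIMIT 10;
```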

--
Sam http://samason.me.uk/