Recording insert, updates, and deletes

Started by Andy Dale · about 19 years ago · 8 messages · general
#1 Andy Dale
andy.dale@gmail.com

Hello,

I need to keep track of the number of committed transactions that I am
interested in, which in my case is all inserts, updates, and deletes
performed on tables in the public schema. I have already tried
"select xact_commit from pg_stat_database", but xact_commit counts a SELECT
as a committed transaction, which I do not want. I also don't consider
pg_stat_database to be absolutely reliable: I have had a Postgres server
installed on my local machine since last June, and it only says I have
16,000 committed transactions (this is not correct; there have been far
more).

I have been looking around the pg_catalog schema and have found a few
functions that are of interest to me:

pg_stat_get_tuples_inserted()
pg_stat_get_tuples_updated()
pg_stat_get_tuples_deleted()
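
They can be called by casting a table name to regclass to get its oid; a
sketch, with 'mytable' as a placeholder name:

```sql
-- regclass casts a table name to its oid; 'mytable' is a placeholder.
SELECT pg_stat_get_tuples_inserted('mytable'::regclass);
```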

Each function takes an oid as its parameter, so I thought I could just pass
a table's oid and it would return the results I am interested in.
Unfortunately, the result returned from any of the above functions is always
0, which is wrong, as I know data has at least been inserted (verified in
pgAdmin). Having checked in postgresql.conf, I found the following settings
for statistics:

# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off

# - Query/Index Statistics Collector -

#stats_start_collector = on
stats_command_string = on
#stats_block_level = off
#stats_row_level = off
#stats_reset_on_server_start = off

Do I have to enable one of these to get the function calls to work, and if
so, which one(s)?

I have also thought of obtaining this number with a simple trigger that
increments a counter (a value in a separate table) on insert, update, or
delete of a row in each table. I have been told that the counter might end
up larger than the true number, because a trigger (BEFORE or AFTER) fires
before the commit has taken place; so it could be possible to increment the
counter and then have the DB server crash, leaving the counter 1 larger
than it should be. Is this correct?

If anyone can help or offer advice on how to achieve my objective it would
be greatly appreciated.

Thanks,

Andy

#2 Brad Nicholson
bnichols@ca.afilias.info
In reply to: Andy Dale (#1)
Re: Recording insert, updates, and deletes

On Wed, 2007-01-10 at 16:51 +0100, Andy Dale wrote:

If anyone can help or offer advice on how to achieve my objective it
would be greatly appreciated.

Slony log shipping will do this

--
Brad Nicholson 416-673-4106
Database Administrator, Afilias Canada Corp.

#3 Andy Dale
andy.dale@gmail.com
In reply to: Brad Nicholson (#2)
Re: Recording insert, updates, and deletes

Hi Brad,

If I have to create a separate Slony replication set, then I cannot do it
this way (I cannot, and do not want to, have a master-slave(s) architecture).

Andy


#4 Scott Marlowe
smarlowe@g2switchworks.com
In reply to: Andy Dale (#3)
Re: Recording insert, updates, and deletes


I've lost the OP, but look in the contrib/spi directory for something
designed to do auditing of inserts / deletes etc...

#5 Andy Dale
andy.dale@gmail.com
In reply to: Scott Marlowe (#4)
Re: Recording insert, updates, and deletes

Hi,

I turned on stats_row_level in the postgresql.conf file, and now the calls
to the stats functions work. I want the inserted, updated, and deleted
counts for a given database, so I have written a query to do so:

SELECT
    sum(pg_stat_get_tuples_inserted(c.oid)) AS inserted,
    sum(pg_stat_get_tuples_updated(c.oid)) AS updated,
    sum(pg_stat_get_tuples_deleted(c.oid)) AS deleted
FROM
    pg_class c, information_schema.tables i
WHERE
    i.table_catalog = 'testdb' AND
    i.table_schema = 'public' AND
    i.table_name = c.relname;

I had to use the information schema, as I could not figure out a way to
fetch the tables of a particular database using only the pg_* tables. What
I am really concerned about now is reliability: is it possible that the
stats can be incorrect, and are they ever reset? Also, does using row-level
stats have a serious effect on performance?

Cheers,

Andy


#6 Richard Huxton
dev@archonet.com
In reply to: Andy Dale (#5)
Re: Recording insert, updates, and deletes

Andy Dale wrote:

I had to use the information schema as I could not figure out a way to
fetch the tables of a particular database using only pg_* tables.

It's the pg_class table you'll want to start with. If you start psql with
-E and then do \dt, you'll see the queries it uses.

What I am really concerned about now is reliability: is it possible that
the stats can be incorrect, and are they ever reset? Also, does using
row-level stats have a serious effect on performance?

Well, I'm not sure about incorrect. AFAIK the stats gatherer is "lossy",
so there's not a 100% guarantee that every read/write is measured.
Performance shouldn't be an issue unless you're already pushing the
limits of your hardware.

--
Richard Huxton
Archonet Ltd

#7 Andy Dale
andy.dale@gmail.com
In reply to: Richard Huxton (#6)
Re: Recording insert, updates, and deletes

Sorry for being stupid; you can select the table info from the pg_class
table, so I can ignore the information schema.
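
The earlier query can then be rewritten against the system catalogs alone;
a sketch, assuming the public schema is the target (joining on pg_namespace
also avoids matching same-named relations in other schemas):

```sql
-- Same per-database totals, using only pg_catalog (no information_schema).
SELECT
    sum(pg_stat_get_tuples_inserted(c.oid)) AS inserted,
    sum(pg_stat_get_tuples_updated(c.oid)) AS updated,
    sum(pg_stat_get_tuples_deleted(c.oid)) AS deleted
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r';   -- ordinary tables only
```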

If the stats collector is "lossy", I will not be able to use it; can anyone
confirm that it is? So maybe my best option is to write a simple trigger
that just increments a counter (a value in a separate table) after an
insert/update/delete, and then add this trigger to each table I want to
record the stats for. Would this new approach work (i.e. be "lossless")?

Cheers,

Andy


#8 Martijn van Oosterhout
kleptog@svana.org
In reply to: Andy Dale (#7)
Re: Recording insert, updates, and deletes

On Thu, Jan 11, 2007 at 11:46:17AM +0100, Andy Dale wrote:

If the stats collector is "lossy", I will not be able to use it; can anyone
confirm that it is?

Yes, the stats collector is designed so that if the server is very busy, it
sacrifices accuracy for speed. It's designed to have minimal impact, so
that it can be turned on without slowing down your system.

You on the other hand want accuracy over speed, and so the stats
collector is not what you want. Some triggers will do it fine.
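
A minimal sketch of such a counter trigger, with hypothetical names
(stats_counter, count_changes, mytable are illustrative, not from the
thread). Because the increment runs inside the same transaction as the DML,
it rolls back if the transaction does, so a crash before commit cannot
leave the counter too large:

```sql
-- Hypothetical counter table: one row per audited table, seeded up front,
-- e.g. INSERT INTO stats_counter VALUES ('mytable', 0);
CREATE TABLE stats_counter (
    table_name text PRIMARY KEY,
    changes    bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION count_changes() RETURNS trigger AS $$
BEGIN
    -- TG_RELNAME is the name of the table that fired the trigger.
    UPDATE stats_counter
       SET changes = changes + 1
     WHERE table_name = TG_RELNAME;
    RETURN NULL;  -- AFTER row trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

-- Attach to each table of interest:
CREATE TRIGGER count_mytable
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE count_changes();
```

Note that a single counter row serializes concurrent writers on its row
lock; inserting one row per change and summing later avoids that contention
at the cost of more storage.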

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/

From each according to his ability. To each according to his ability to litigate.