Memory Leak
Hello,
RedHat 7.0, Postgres 7.1 (libpq), Intel Celeron 433, 64MB RAM, 15GB HD.
I am running a test which performs 1000 transactions of 1000 updates each of a single column in a single table, i.e. (1 transaction = 1000 updates) * 1000. I have no indices on any of the columns, and the table has 3 columns and 200 records. I do a VACUUM ANALYZE after every transaction. A single transaction takes about 3-6 seconds.
It appears that free RAM decreases at about 10 to 100K a second until it is all gone. Any thoughts on how I can optimise/configure the db to alleviate this problem? Any hints on where this leak may be occurring?
Thanks,
-justin
I am using Postgres 7.0.2 :)
Sorry about that. I promise to put down my crack pipe before I send emails :)
Here is a sample of the code which demonstrates the memory problem I am having. The lockup does not occur immediately after memory has been maxed out; it appears that there is an attempt to recover some memory, about 1 Kbyte, once the max is near. This goes on for about half a day to one full day until everything finally freezes up.
btw: This project has just been dropped into my lap this week, so please excuse my ignorance. I am wading through one of my co-workers' code and trying to catch up on the research. Your help is greatly appreciated. :)
Since my initial testing I have discovered that doing 1000 updates of 200 rows in a single transaction is sort of overkill. Only the last 200 updates will ever be seen. Right? :) The code demonstrates the problem nonetheless.
---
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <libpq-fe.h>

/* CONNECTION_STRING and DDT_UPDATE_REALTIME are macros defined elsewhere
   in the project: the libpq connection string and the UPDATE statement
   template used below. */

int main(int argc, char **argv)
{
    int i;
    PGconn *conn;               /* The connection to the database */
    PGresult *res;              /* data structure holding query results */
    time_t start, end, stime;
    time_t begin;
    char lastValue[10];
    char query[250];
    int k;

    conn = PQconnectdb (CONNECTION_STRING);
    if (PQstatus (conn) == CONNECTION_BAD)
    {
        printf ("Unable to connect to database\n");
        return 1;
    }

    time (&begin);

    /* Run batches of 1000 updates until the program is interrupted. */
    for (k = 0; ; k++)
    {
        time (&start);

#ifdef USE_TRANSACTIONS
        /* Create a transaction, so we can use cursors */
        res = PQexec (conn, "BEGIN");
        if (!res || PQresultStatus (res) != PGRES_COMMAND_OK)
            printf ("Error starting transaction\n");
        PQclear (res);
#endif

        for (i = 0; i < 1000; i++)
        {
            time (&stime);
            sprintf (lastValue, "%d%x", i, i);
            sprintf (query, DDT_UPDATE_REALTIME, lastValue, ctime (&stime));

            res = PQexec (conn, query);
            if (!res || PQresultStatus (res) != PGRES_COMMAND_OK)
            {
                printf ("Error processing query\n");
                PQclear (res);
                return 1;
            }

            /* PQcmdTuples returns an empty string if no rows were affected */
            if (strlen (PQcmdTuples (res)) <= 0)
            {
                printf ("Error processing query\n");
                PQclear (res);
                return 1;
            }
            PQclear (res);
        }

#ifdef USE_TRANSACTIONS
        res = PQexec (conn, "COMMIT");
        PQclear (res);

        res = PQexec (conn, "VACUUM ANALYZE");
        if (!res || PQresultStatus (res) != PGRES_COMMAND_OK)
            printf ("Error running VACUUM ANALYZE\n");
        PQclear (res);
#endif

        time (&end);
        printf ("Inner Loop %d, started: %s: Elapsed time: %d\n", k,
                ctime (&begin), (int) (end - start));
    }

    PQfinish (conn);
    return 0;
}
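CONNECTION_STRING and DDT_UPDATE_REALTIME are defined in the project's headers and are not shown here. Just to illustrate their shape (these are stand-ins, not the exact definitions), they are a libpq connection string and a two-argument UPDATE template along the lines of:
---
/* Stand-in definitions only; the real ones live in the project headers. */
#define CONNECTION_STRING   "host=localhost dbname=ddt user=ddt"
#define DDT_UPDATE_REALTIME "UPDATE realtime SET last_value = '%s', updated = '%s'"
---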
Joseph Shraibman wrote:
7.1 is in development. Things like this should be discussed in the hackers list.
Justin Foster <jfoster@corder-eng.com> writes:
I am running a test which performs 1000 transactions of 1000 updates each of a single column in a single table, i.e. (1 transaction = 1000 updates) * 1000. I have no indices on any of the columns, and the table has 3 columns and 200 records. I do a VACUUM ANALYZE after every transaction. A single transaction takes about 3-6 seconds.
It appears that free RAM decreases at about 10 to 100K a second until it is all gone.
When you say "RAM decreases", do you mean that the process size of the
backend is growing?
We have some known problems with memory leakage during a query
(hopefully 7.1 will solve this), but I'm not aware of any problems
that would cause leakage that accumulates across queries --- at least
not for such a simple case as you describe. Normally, all memory used
during a query is freed at query end, so the test you describe ought
to run in a static backend process size.
Could we see the exact query/queries you are running, and the full
definition of the table?
regards, tom lane
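A quick way to answer the process-size question is to watch the backend's own /proc entry from the test program. Here is a minimal sketch, assuming a Linux /proc filesystem such as the one on the RedHat 7.0 box described above; PQbackendPID is standard libpq, while the print_backend_memory helper and the VmSize/VmRSS parsing are Linux-specific illustration, not part of the original program:
---
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Print the VmSize and VmRSS lines from /proc/<pid>/status for the
   backend serving this connection, so its growth can be tracked. */
void print_backend_memory (PGconn *conn)
{
    char path[64];
    char line[256];
    FILE *fp;

    sprintf (path, "/proc/%d/status", PQbackendPID (conn));
    fp = fopen (path, "r");
    if (fp == NULL)
    {
        printf ("could not open %s\n", path);
        return;
    }
    while (fgets (line, sizeof (line), fp) != NULL)
    {
        /* Lines look like "VmRSS:      1234 kB" */
        if (strncmp (line, "VmSize:", 7) == 0 ||
            strncmp (line, "VmRSS:", 6) == 0)
            printf ("%s", line);
    }
    fclose (fp);
}
---
Dropping this into the test program and calling it once per outer-loop iteration (e.g. right after the VACUUM ANALYZE) would show whether the backend's VmSize/VmRSS keeps climbing from one transaction to the next, which is what the question above is getting at.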