Memory leaks for large objects

Started by Maurice Gittens · February 1998 · 8 messages
#1 Maurice Gittens
mgittens@gits.nl

Ok,

I think large objects are leaking memory because the large object functions
in the backend use their own GlobalMemoryContext (called Filesystem), which
(according to a quick grep) is never freed.

Suppose this is true, and that I ensure the large object subsystem always
uses the current memory context for its memory allocations.

What might go wrong? (Or why did the designers decide to use a
GlobalMemoryContext for large objects?).

I simply don't understand why one would create a special memory context
for large objects without some special reason.
Or should I just try it and see if anything breaks?
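
A toy illustration of the pattern being described (this is not the backend's
actual code; the MemoryContext and Chunk types below are simplified
stand-ins): allocations made in a per-transaction context go away at commit,
while allocations made into a global, never-reset context accumulate across
transactions.

/* Simplified model: a "memory context" is an arena that owns its chunks. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Chunk
{
    struct Chunk *next;
} Chunk;

typedef struct
{
    const char *name;
    Chunk      *chunks;
    size_t      total;
} MemoryContext;

/* Stand-ins for the global "Filesystem" context and a per-transaction one. */
static MemoryContext FilesystemContext  = { "Filesystem",  NULL, 0 };
static MemoryContext TransactionContext = { "Transaction", NULL, 0 };

static void *
ctx_alloc(MemoryContext *ctx, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);

    c->next = ctx->chunks;
    ctx->chunks = c;
    ctx->total += size;
    return c + 1;
}

static void
ctx_reset(MemoryContext *ctx)
{
    while (ctx->chunks)
    {
        Chunk *next = ctx->chunks->next;

        free(ctx->chunks);
        ctx->chunks = next;
    }
    ctx->total = 0;
}

int
main(void)
{
    int xact;

    for (xact = 1; xact <= 3; xact++)
    {
        ctx_alloc(&TransactionContext, 1024);   /* ordinary per-transaction work */
        ctx_alloc(&FilesystemContext, 1024);    /* large-object bookkeeping */

        ctx_reset(&TransactionContext);         /* "commit": per-xact memory freed */

        /* nothing ever resets FilesystemContext, so it only grows */
        printf("after xact %d: Transaction=%lu bytes, Filesystem=%lu bytes\n",
               xact, (unsigned long) TransactionContext.total,
               (unsigned long) FilesystemContext.total);
    }
    return 0;
}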

Thanks for any comments.
Maurice

#2 Peter T Mount
psqlhack@maidast.demon.co.uk
In reply to: Maurice Gittens (#1)
Re: [HACKERS] Memory leaks for large objects

On Mon, 16 Feb 1998, Maurice Gittens wrote:

> Ok,
>
> I think large objects are leaking memory because the large object functions
> in the backend use their own GlobalMemoryContext (called Filesystem), which
> (according to a quick grep) is never freed.
>
> Suppose this is true, and that I ensure the large object subsystem always
> uses the current memory context for its memory allocations.
>
> What might go wrong? (Or why did the designers decide to use a
> GlobalMemoryContext for large objects?).
>
> I simply don't understand why one would create a special memory context
> for large objects without some special reason.
> Or should I just try it and see if anything breaks?

I was wondering the same thing when I was looking at that part of the code
a couple of months back. It would be interesting to see if anything did
break.

--
Peter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk
Main Homepage: http://www.demon.co.uk/finder
Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk

#3 Thomas G. Lockhart
lockhart@alumni.caltech.edu
In reply to: Peter T Mount (#2)
Re: [HACKERS] Memory leaks for large objects

Peter T Mount wrote:

> On Mon, 16 Feb 1998, Maurice Gittens wrote:
>
> > Ok,
> >
> > I think large objects are leaking memory because the large object functions
> > in the backend use their own GlobalMemoryContext (called Filesystem), which
> > (according to a quick grep) is never freed.
> >
> > Suppose this is true, and that I ensure the large object subsystem always
> > uses the current memory context for its memory allocations.
> >
> > What might go wrong? (Or why did the designers decide to use a
> > GlobalMemoryContext for large objects?).
> >
> > I simply don't understand why one would create a special memory context
> > for large objects without some special reason.
> > Or should I just try it and see if anything breaks?
>
> I was wondering the same thing when I was looking at that part of the code
> a couple of months back. It would be interesting to see if anything did
> break.

Does the large object I/O persist across transactions? If so, then storage would
need to be outside of the usual context, which is reset after every transaction.
Is there a place where the large object context could be freed, but is not at
the moment?

- Tom

#4 Maurice Gittens
mgittens@gits.nl
In reply to: Thomas G. Lockhart (#3)
Re: [HACKERS] Memory leaks for large objects

> Does the large object I/O persist across transactions? If so, then storage
> would need to be outside of the usual context, which is reset after every
> transaction. Is there a place where the large object context could be freed,
> but is not at the moment?
>
> - Tom

Large object I/O does not persist across transactions in my case.
But maybe there are applications which assume that it does, so
"fixing" it might break things. How about a compile-time flag
which selects between the old behaviour and the new behaviour?
The old behaviour could be the default.

(The new behaviour would simply avoid fiddling with MemoryContexts at all.)
My current workaround is to reconnect to the database after some
number of transactions.
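
For what it's worth, that workaround looks roughly like this from a libpq
client (a sketch only; the connection string, the large-object work, and the
choice of N are placeholders):

#include <stdio.h>
#include <libpq-fe.h>

#define XACTS_PER_CONNECTION 100        /* arbitrary */

int
main(void)
{
    const char *conninfo = "dbname=test";       /* placeholder */
    PGconn     *conn = PQconnectdb(conninfo);
    int         i;

    for (i = 0; i < 1000; i++)
    {
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PQclear(PQexec(conn, "BEGIN"));
        /* ... large object work goes here ... */
        PQclear(PQexec(conn, "COMMIT"));

        /* every N transactions, drop the backend (and its leaked memory)
         * by closing the connection and opening a fresh one */
        if ((i + 1) % XACTS_PER_CONNECTION == 0)
        {
            PQfinish(conn);
            conn = PQconnectdb(conninfo);
        }
    }

    PQfinish(conn);
    return 0;
}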

Regards,
Maurice

#5 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Maurice Gittens (#4)
Re: [HACKERS] Memory leaks for large objects

> Large object I/O does not persist across transactions in my case.
> But maybe there are applications which assume that it does, so
> "fixing" it might break things. How about a compile-time flag
> which selects between the old behaviour and the new behaviour?
> The old behaviour could be the default.
>
> (The new behaviour would simply avoid fiddling with MemoryContexts at all.)
> My current workaround is to reconnect to the database after some
> number of transactions.

Large objects have been broken for quite some time. I say remove the
memory context stuff and see what breaks. Can't be worse than earlier
releases, and if there is a problem, it will show up for us and we can
issue a patch.

--
Bruce Momjian
maillist@candle.pha.pa.us

#6 Maurice Gittens
mgittens@gits.nl
In reply to: Bruce Momjian (#5)
Re: [HACKERS] Memory leaks for large objects

> Large objects have been broken for quite some time. I say remove the
> memory context stuff and see what breaks. Can't be worse than earlier
> releases, and if there is a problem, it will show up for us and we can
> issue a patch.
>
> --

I ensured that all memory allocations in be-fsstubs.c used the
current memory context for their allocations.
The system encounters errors when opening large objects which
were just created. Messages like: "ERROR cannot open xinv<number>".
This happens even though all large object operations are performed
in a transaction.

I'm now wondering whether in this approach the files associated
with the large object will ever be freed (or will the virtual file
descriptor stuff handle this?).

Might it be that, because large objects are implemented using
relations/indexes, information about these must persist until they
are properly closed by the postgres system?

How about not changing anything except adding a lo_garbage_collect function,
which frees the MemoryContext used by large objects and does any other
work needed? (Like closing indexes/relations?)
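
A rough sketch of what such a call might amount to, modelled outside the
backend (none of the types or helpers below exist in the source tree; they
only stand in for whatever per-large-object state lives in the Filesystem
context): lo_garbage_collect walks the still-open descriptors, closes each
one, and frees everything in one pass.

#include <stdio.h>
#include <stdlib.h>

typedef struct LODesc
{
    unsigned int   oid;         /* stands in for the xinv relation/index state */
    struct LODesc *next;
} LODesc;

/* stand-in for state kept in the long-lived "Filesystem" context */
static LODesc *open_los = NULL;

static void
lo_open_toy(unsigned int oid)
{
    LODesc *d = malloc(sizeof(LODesc));

    d->oid = oid;
    d->next = open_los;
    open_los = d;
}

/* the proposed entry point: close anything still open and free it all */
static void
lo_garbage_collect(void)
{
    while (open_los)
    {
        LODesc *next = open_los->next;

        printf("closing large object %u\n", open_los->oid);   /* "close rel/index" */
        free(open_los);
        open_los = next;
    }
}

int
main(void)
{
    lo_open_toy(16385);
    lo_open_toy(16390);
    lo_garbage_collect();       /* called whenever it is known to be safe */
    return 0;
}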

Thanks,
Maurice

#7 Peter T Mount
psqlhack@maidast.demon.co.uk
In reply to: Maurice Gittens (#4)
Re: [HACKERS] Memory leaks for large objects

On Wed, 18 Feb 1998, Maurice Gittens wrote:

> > Does the large object I/O persist across transactions? If so, then storage
> > would need to be outside of the usual context, which is reset after every
> > transaction. Is there a place where the large object context could be freed,
> > but is not at the moment?
> >
> > - Tom
>
> Large object I/O does not persist across transactions in my case.

They do here when I've tried them.

> But maybe there are applications which assume that it does, so
> "fixing" it might break things. How about a compile-time flag
> which selects between the old behaviour and the new behaviour?
> The old behaviour could be the default.
>
> (The new behaviour would simply avoid fiddling with MemoryContexts at all.)
> My current workaround is to reconnect to the database after some
> number of transactions.

At the moment, JDBC defaults to not using transactions. As not many
Java apps are using large objects yet (it's a new 6.3 feature), it
shouldn't be difficult to disable the APIs if autoCommit is enabled (i.e.
no transaction).

Thinking about it, the large object examples in the source tree use
transactions, so perhaps this is the original behaviour...
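
For reference, the shape of that pattern from a libpq client is roughly the
following (a minimal sketch; error handling is trimmed and "dbname=test" is a
placeholder): every large object call happens between BEGIN and COMMIT on the
same connection, since the descriptor returned by lo_open is only valid
inside that transaction.

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* placeholder */
    Oid     loid;
    int     fd;
    char    data[] = "hello, large object";

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* all large object calls happen inside one transaction */
    PQclear(PQexec(conn, "BEGIN"));

    loid = lo_creat(conn, INV_READ | INV_WRITE);
    fd = lo_open(conn, loid, INV_WRITE);
    lo_write(conn, fd, data, strlen(data));
    lo_close(conn, fd);

    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}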

--
Peter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk
Main Homepage: http://www.demon.co.uk/finder
Work Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk

#8 Bruce Momjian
maillist@candle.pha.pa.us
In reply to: Maurice Gittens (#6)
Re: [HACKERS] Memory leaks for large objects

Added to TODO list.

> > Large objects have been broken for quite some time. I say remove the
> > memory context stuff and see what breaks. Can't be worse than earlier
> > releases, and if there is a problem, it will show up for us and we can
> > issue a patch.
> >
> > --

> I ensured that all memory allocations in be-fsstubs.c used the
> current memory context for their allocations.
> The system encounters errors when opening large objects which
> were just created. Messages like: "ERROR cannot open xinv<number>".
> This happens even though all large object operations are performed
> in a transaction.
>
> I'm now wondering whether in this approach the files associated
> with the large object will ever be freed (or will the virtual file
> descriptor stuff handle this?).
>
> Might it be that, because large objects are implemented using
> relations/indexes, information about these must persist until they
> are properly closed by the postgres system?
>
> How about not changing anything except adding a lo_garbage_collect function,
> which frees the MemoryContext used by large objects and does any other
> work needed? (Like closing indexes/relations?)
>
> Thanks,
> Maurice

-- 
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  +  If your life is a hard drive,     |  (610) 353-9879(w)
  +  Christ can be your backup.        |  (610) 853-3000(h)