Re: [HACKERS] Re: Are large objects well supported? Are they considered very stable to use?

Started by Tatsuo Ishii, almost 27 years ago, 2 messages
#1 Tatsuo Ishii
t-ishii@sra.co.jp

I have tried to use the lo interface and it appears to
work OK (although there is a fix required for Solaris).
There is also a memory leak in the backend, so several
thousand large objects will probably cause the backend
to fail.

This was reported some time ago, but I haven't had time to fix it.

Ouch.

Well, perhaps if I tell you PG hackers what I want to do, you could
tell me the best way to do it.

I want to have a comment database storing ASCII text comments. These
could be over 8000 bytes, and my understanding is that conventional PG
rows can't be bigger than 8000 bytes. On the other hand, most of them
will probably be much smaller than 8000 bytes. I will certainly have
more than "several thousand" of them.

I think the problem stated above is that creating lots of large
objects in a single session could cause trouble. On the other hand, if
you don't read or write too many in one session, you could avoid the
problem, I guess.

Are large objects the right way to go here? What are the disk usage /
speed tradeoffs of using large objects here, compared, say, to
straight UNIX files? The main reasons I don't want to use the file
system are that I might run out of inodes, and that it's probably not
very fast or efficient.

If you are short of inodes, forget about large objects. Creating a
large object consumes 2 inodes (one for holding the data itself,
another for an index for faster access), and this is probably not
good news for you.

I think we could implement large objects in a different way, for
example packing many of them into a single table. This is just a
thought, though.
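The packing idea above could be sketched roughly as follows: instead of two files per object, keep fixed-size pages for all large objects in one shared table keyed by (object id, page number). This is only an illustration of the thought, not actual PostgreSQL code; the names `PackedLargeObjects` and `PAGE_SIZE` are invented for the sketch.

```python
# Toy sketch of packing many large objects into a single "table":
# one mapping keyed by (lo_id, page_no) holding fixed-size pages,
# rather than two UNIX files (data + index) per object.
# All names here are illustrative, not a real PostgreSQL API.

PAGE_SIZE = 2048  # arbitrary page size for the sketch


class PackedLargeObjects:
    def __init__(self):
        self.pages = {}   # (lo_id, page_no) -> bytes
        self.next_id = 1

    def create(self, data: bytes) -> int:
        """Store data split into pages; return the new object's id."""
        lo_id = self.next_id
        self.next_id += 1
        for page_no, i in enumerate(range(0, len(data), PAGE_SIZE)):
            self.pages[(lo_id, page_no)] = data[i:i + PAGE_SIZE]
        return lo_id

    def read(self, lo_id: int) -> bytes:
        """Reassemble an object by reading its pages in order."""
        out = []
        page_no = 0
        while (lo_id, page_no) in self.pages:
            out.append(self.pages[(lo_id, page_no)])
            page_no += 1
        return b"".join(out)
```

The point of the design is that many small objects share one storage structure, so per-object file-system overhead (inodes) disappears.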
---
Tatsuo Ishii

#2 Chris Bitmead
chris.bitmead@bigfoot.com
In reply to: Tatsuo Ishii (#1)
Re: Are large objects well supported? Are they considered very stable to use?

Thanks for all the suggestions about large objects. To me they sound
like nearly a waste of time, partly because they take 2 unix files
each, and partly because the minimum size is 16k.

For the moment I think I will use the text type in a regular class and
just put up with the restriction of less than 8k. Maybe I will use an
"oid more" link for chaining.

I think the only real solution to this is to remove the arbitrary limits
in Postgres, such as the 8k record limit and the 8k query buffer limit.

Has anybody thought much about this yet?

--
Chris Bitmead
http://www.bigfoot.com/~chris.bitmead
mailto:chris.bitmead@bigfoot.com