Large objects error - expected a 'V' from the backend
Hi all,
I'm using postgres 6.3.2 as built by RedHat 5.2.
Every time one of my programs tries to read the _2nd_ large object it
gets an error. Well actually, closing the descriptor on the 1st large
object fails, as does retrieving the 2nd large object. The error is....
PQfn: expected a 'V' from the backend. Got 'N' instead
I have a code extract below. It is simply a Perl program using
Pg-0.91 that opens the database and tries to read two large objects
given on the command line.
What is the best bet for getting around this? Is upgrading to a later
version of postgres likely to help? Has anyone seen this before?
The large objects I'm using aren't very large, only a few hundred bytes.
Are large objects well supported? Are they considered very stable to
use?
Here is the code....
package techmod;
require Exporter;
use DBI;
use Pg;

# Connect once and cache the handle in a package variable.
sub pgdbconnect
{
    $pgdbh ||= Pg::connectdb("dbname=httpd");
    die "connection to dbname=httpd failed" unless $pgdbh;
    $pgdbh->trace(STDOUT);
    return $pgdbh;
}

# Read an entire large object into a string and return it.
sub getlarge
{
    my ($name, $lobjId) = @_;
    my ($buf, $b, $lobj_fd, $nbytes);
    my $mode = PGRES_INV_READ;

    if (0 <= ($lobj_fd = $pgdbh->lo_open($lobjId, $mode)))
    {
        print "open\n";
        while (0 < ($nbytes = $pgdbh->lo_read($lobj_fd, $b, 100000)))
        {
            $buf .= $b;
        }
        if ($nbytes < 0)
        { print "read fail\n", $pgdbh->errorMessage; }
        if ($pgdbh->lo_close($lobj_fd) < 0)
        { print "close fail\n", $pgdbh->errorMessage; }
    }
    else
    {
        print "notopen $lobjId\n", $pgdbh->errorMessage;
    }
    return $buf;
}

1;
#!/usr/bin/perl
use techmod;

techmod->pgdbconnect();

# Fetch and print the two large objects whose ids are given on the command line.
$lobjId = $ARGV[0];
print techmod->getlarge($lobjId);
print techmod->getlarge($ARGV[1]);
Here is an extract from the trace.
To backend> F
To backend (4#)> 954
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 0
To backend (4#)> 4
To backend (4#)> 100000
From backend> V
From backend> G
From backend (#4)> 33
From backend (33)> This is some data stored in a large object.
From backend> 0
To backend> F
To backend (4#)> 954
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 0
To backend (4#)> 4
To backend (4#)> 100000
From backend> V
From backend> G
From backend (#4)> 0
From backend (0)>
From backend> 0
To backend> F
To backend (4#)> 953
To backend (4#)> 1
To backend (4#)> 4
To backend (4#)> 0
From backend> N
close fail
PQfn: expected a 'V' from the backend. Got 'N' insteadThis is some data
stored in a large object
To backend> F
To backend (4#)> 952
To backend (4#)> 2
To backend (4#)> 4
To backend (4#)> 21008
To backend (4#)> 4
To backend (4#)> 262144
From backend> N
notopen 21008
PQfn: expected a 'V' from the backend. Got 'N' instead
--
Chris Bitmead
http://www.bigfoot.com/~chris.bitmead
mailto:chris.bitmead@bigfoot.com
Chris Bitmead wrote:
> Hi all,
> I'm using postgres 6.3.2 as built by RedHat 5.2.
> Every time one of my programs tries to read the _2nd_ large object it
> gets an error. Well actually, closing the descriptor on the 1st large
> object fails, as does retrieving the 2nd large object. The error is....
> PQfn: expected a 'V' from the backend. Got 'N' instead
> I have a code extract below. It is simply a Perl program using
> Pg-0.91 that opens the database and tries to read two large objects
> given on the command line.
This will most probably not solve your problem, but for DBD-Pg-0.91 you need
postgresql-6.4.2.
Edmund
--
Edmund Mergl mailto:E.Mergl@bawue.de
Im Haldenhau 9 http://www.bawue.de/~mergl
70565 Stuttgart fon: +49 711 747503
Germany
> I have tried to use the lo interface and it appears to
> work ok (although there is a fix required for Solaris).
> There is also a memory leak in the back end, so several
> thousand large objects will probably cause the backend
> to fail.
Ouch.
Well perhaps if I tell you PG hackers what I want to do, if you could
tell me the best way to do it.
I want to have a comment database storing ASCII text comments. These
could be over 8000 bytes, and my understanding is that conventional PG
rows can't be bigger than 8000 bytes. On the other hand most of them
will probably be much smaller than 8000 bytes. I will certainly have
more than "several thousand" of them.
Are large objects the right way to go here? What are the disk usage /
speed tradeoffs of using large objects here, perhaps compared to
straight UNIX files? The main reasons I don't use the file system are
that I might run out of inodes, and that it's probably not that fast or
efficient.
I'd stay away from PostgreSQL large objects for now.
Two big problems:
1) Minimum size is 16K
2) They all end up in the same directory as your regular
tables.
If you need to store a lot of files in the 10-20-30K size, I'd
suggest first trying the unix file system, but hash them into some
sort of subdirectory structure so as to have not so many in each
directory. 256 per directory is nice, so give each file a 32-bit
id, store the id and the key information in postgresql, and when
you need file 0x12345678, go to 12/34/56/12345678.txt (see the sketch
below). You could be smarter about the hashing so the bins fill
evenly. Either way you can spread the load out over different file
systems with soft links.
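A minimal sketch of that id-to-path mapping in perl (the directory root
and the .txt suffix here are just placeholders for illustration):

#!/usr/bin/perl
# Sketch: map a 32-bit file id to a hashed path like 12/34/56/12345678.txt.
# The root directory and the .txt suffix are assumptions for illustration.
sub id_to_path
{
    my ($root, $id) = @_;
    my $hex = sprintf("%08x", $id);                  # 0x12345678 -> "12345678"
    my ($d1, $d2, $d3) = unpack("a2 a2 a2", $hex);   # "12", "34", "56"
    return "$root/$d1/$d2/$d3/$hex.txt";
}

# Example: prints /var/comments/12/34/56/12345678.txt
print id_to_path("/var/comments", 0x12345678), "\n";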
If space is at a premium, and your files are compressible, you can
do what we did on one project: batch the files up into batches of,
say, about 32k (i.e. keep adding files till the aggregate gets over
32k), store start and end offsets for each file in the database, and
gzip each batch. gzip -d -c can tear through whatever your 32K compresses
down to pretty quickly, and a little bit of C or perl can discard the
unwanted leading part of the file pretty quickly too (see the sketch
below). You can store the blocks themselves hashed as described above.
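A rough illustration of pulling one file back out of a gzipped batch,
assuming gzip is on the PATH; the batch file name and the offsets are
invented, and in practice they would come from the database:

#!/usr/bin/perl
# Sketch: extract one file from a gzipped batch, given its start/end
# byte offsets within the uncompressed batch.
sub extract_from_batch
{
    my ($batch, $start, $end) = @_;
    my ($junk, $data);
    open(GZ, "gzip -d -c $batch |") or die "can't run gzip: $!";
    read(GZ, $junk, $start) == $start or die "batch too short";   # discard leading part
    read(GZ, $data, $end - $start) == ($end - $start) or die "batch too short";
    close(GZ);
    return $data;
}

# Example: bytes 12000..12733 of the uncompressed batch hold one file.
print extract_from_batch("batch0001.gz", 12000, 12734);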
Have fun,
Drop me a line if I can help.
-- cary
cobrien@radix.net
FYI, on a standard RedHat 5.2 system the current PG snapshot fails the
following regressions...
int2 .. failed
int4 .. failed
geometry .. failed
If anyone wants more info, let me know.
diff results/int4.out expected/int4.out
10c10
< ERROR: pg_atoi: error reading "1000000000000": Numerical result out of range
---
> ERROR: pg_atoi: error reading "1000000000000": Math result not representable
diff results/int2.out expected
10c10
< ERROR: pg_atoi: error reading "100000": Numerical result out of range
---
> ERROR: pg_atoi: error reading "100000": Math result not representable
> FYI, on a standard RedHat 5.2 system the current PG snapshot fails the
> following regressions...
OK. I have the "reference platform" for the regression tests, and it
has recently had a (forced) upgrade to RH5.2 after losing some disks.
I'd expect the regression tests to start matching your installation
very soon now; certainly before we release v6.5.
Thanks for the info.
- Tom