UUID-OSSP Contrib Module Compilation Issue
Hi All,
I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
I am building on Solaris x86 with Sun Studio 12.
I built the ossp-uuid version 1.6.2 libraries and installed them,
however, whenever I attempt to build the contrib module I always end up
with the following error:
----------------------
+ cd contrib
+ cd uuid-ossp
+ make all
sed 's,MODULE_PATHNAME,$libdir/uuid-ossp,g' uuid-ossp.sql.in >uuid-ossp.sql
/usr/bin/cc -Xa -I/usr/sfw/include -KPIC -I. -I../../src/include
-I/usr/sfw/include -c -o uuid-ossp.o uuid-ossp.c
"uuid-ossp.c", line 29: #error: OSSP uuid.h not found
cc: acomp failed for uuid-ossp.c
make: *** [uuid-ossp.o] Error 2
----------------------
I have the ossp uuid libraries and headers in the standard locations
(/usr/include, /usr/lib), but the checks within the contrib module don't
appear to find the ossp uuid headers I have installed.
Am I missing something here, or could the #ifdefs have something to do
with it not picking up the newer ossp uuid definitions?
Any suggestions would be greatly appreciated.
Thanks
Bruce
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
I am trying to build the uuid-ossp contrib module for PostgreSQL 8.3.4.
I am building on Solaris x86 with Sun Studio 12.
I built the ossp-uuid version 1.6.2 libraries and installed them,
however, whenever I attempt to build the contrib module I always end up
with the following error:
"uuid-ossp.c", line 29: #error: OSSP uuid.h not found
Um ... did you run PG's configure script with --with-ossp-uuid?
It looks like either you didn't do that, or configure doesn't know
to look in the place where you put the ossp-uuid header files.
regards, tom lane
Um ... did you run PG's configure script with --with-ossp-uuid?
It looks like either you didn't do that, or configure doesn't know
to look in the place where you put the ossp-uuid header files.
Doh, I missed that. However, I have now included that option, but it
still does not find the libraries that I have installed.
My configure options are:
./configure --prefix=/opt/postgresql-v8.3.4 \
--with-openssl \
--without-readline \
--with-perl \
--enable-integer-datetimes \
--enable-thread-safety \
--enable-dtrace \
--with-ossp-uuid
When I run configure with the above options, I end up with the following
configure error:
checking for uuid_export in -lossp-uuid... no
checking for uuid_export in -luuid... no
configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID
The uuid library that I built was obtained from the following url as
mentioned in the documentation:
http://www.ossp.org/pkg/lib/uuid/
I've built and installed version 1.6.2; the libraries/headers are
installed in /usr/lib and /usr/include, and the CLI tool is in /usr/bin.
ll /usr/lib/*uuid* | grep 'Oct 28'
-rw-r--r--   1 root bin  81584 Oct 28 15:33 /usr/lib/libuuid_dce.a
-rw-r--r--   1 root bin    947 Oct 28 15:33 /usr/lib/libuuid_dce.la
lrwxrwxrwx   1 root root    22 Oct 28 15:34 /usr/lib/libuuid_dce.so -> libuuid_dce.so.16.0.22
lrwxrwxrwx   1 root root    22 Oct 28 15:34 /usr/lib/libuuid_dce.so.16 -> libuuid_dce.so.16.0.22
-rwxr-xr-x   1 root bin  80200 Oct 28 15:33 /usr/lib/libuuid_dce.so.16.0.22
-rw-r--r--   1 root bin  77252 Oct 28 15:33 /usr/lib/libuuid.a
-rw-r--r--   1 root bin    919 Oct 28 15:33 /usr/lib/libuuid.la
lrwxrwxrwx   1 root root    18 Oct 28 15:34 /usr/lib/libuuid.so -> libuuid.so.16.0.22
lrwxrwxrwx   1 root root    18 Oct 28 15:34 /usr/lib/libuuid.so.16 -> libuuid.so.16.0.22
-rwxr-xr-x   1 root bin  76784 Oct 28 15:33 /usr/lib/libuuid.so.16.0.22
Do I need to use a specific version of the ossp-uuid libraries for this
module?
Thanks
Bruce
Hi.
Um, you will need to reconfigure PostgreSQL then; it is necessary to specify --with-ossp-uuid.
Regards,
Hiroshi Saito
----- Original Message -----
From: "Bruce McAlister" <bruce.mcalister@blueface.ie>
To: "pgsql" <pgsql-general@postgresql.org>
Sent: Wednesday, October 29, 2008 8:01 AM
Subject: [GENERAL] UUID-OSSP Contrib Module Compilation Issue
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
When I run configure with the above options, I end up with the following
configure error:
checking for uuid_export in -lossp-uuid... no
checking for uuid_export in -luuid... no
configure: error: library 'ossp-uuid' or 'uuid' is required for OSSP-UUID
Huh. Nothing obvious in your info about why it wouldn't work. I think
you'll need to dig through the config.log output to see why these link
tests are failing. (They'll be a few hundred lines above the end of the
log, because the last part of the log is always a dump of configure's
internal variables.)
regards, tom lane
Do I need to use a specific version of the ossp-uuid libraries for this
module?
The 1.6.2 stable version you are using is the right one.
Regards,
Hiroshi Saito
Huh. Nothing obvious in your info about why it wouldn't work. I think
you'll need to dig through the config.log output to see why these link
tests are failing. (They'll be a few hundred lines above the end of the
log, because the last part of the log is always a dump of configure's
internal variables.)
In addition to the missing configure option, it turned out to be missing
LDFLAGS parameters, I just added -L/usr/lib to LDFLAGS and it all built
successfully now.
Thanks for the pointers :)
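[Editor's note: putting the pieces together, the invocation that worked here would have looked roughly like this. This is a sketch reconstructed from the options listed earlier in the thread; the added LDFLAGS is the fix Bruce describes.]

```shell
# Sketch of the working configure invocation; LDFLAGS=-L/usr/lib is the
# addition that let configure's link test find the OSSP uuid library.
LDFLAGS="-L/usr/lib" ./configure \
    --prefix=/opt/postgresql-v8.3.4 \
    --with-openssl \
    --without-readline \
    --with-perl \
    --enable-integer-datetimes \
    --enable-thread-safety \
    --enable-dtrace \
    --with-ossp-uuid
```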
The 1.6.2 stable version which you use is right.
Thanks, we managed to get it working now. Thanks for the pointers.
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
In addition to the missing configure option, it turned out to be missing
LDFLAGS parameters, I just added -L/usr/lib to LDFLAGS and it all built
successfully now.
Bizarre ... I've never heard of a Unix system that didn't consider that
a default place to look. Unless this is a 64-bit machine and uuid
should have installed itself in /usr/lib64?
regards, tom lane
I am planning on setting up PITR for my application.
It does not see much traffic and it looks like the 16 MB log files
switch about every 4 hours or so during business hours.
I am also about to roll out functionality to store documents in a bytea
column. This should make the logs roll faster.
I also have to ship them off site using a T1 so setting the time to
automatically switch files will just waste bandwidth if they are still
going to be 16 MB anyway.
1. What is the effect of recompiling and reducing the default size of
the WAL files?
2. What is the minimum suggested size?
3. If I reduce the size how will this work if I try to save a document
that is larger than the WAL size?
Any other suggestions would be most welcome.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com
Jason Long wrote:
I am planning on setting up PITR for my application.
I also have to ship them off site using a T1 so setting the time to
automatically switch files will just waste bandwidth if they are still
going to be 16 MB anyway.
1. What is the effect of recompiling and reducing the default size of
the WAL files?
Increased I/O
2. What is the minimum suggested size?
16 megs, the default.
3. If I reduce the size how will this work if I try to save a document
that is larger than the WAL size?
You will create more segments.
Joshua D. Drake
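[Editor's note: for reference, the settings under discussion live in postgresql.conf. A minimal sketch for an 8.3-era server, with a hypothetical archive path:]

```
archive_mode = on                         # enable WAL archiving (8.3+)
archive_command = 'cp %p /archive/%f'     # replace with your off-site shipping command
archive_timeout = 300                     # force a segment switch every 5 minutes
```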
Bizarre ... I've never heard of a Unix system that didn't consider that
a default place to look. Unless this is a 64-bit machine and uuid
should have installed itself in /usr/lib64?
It is a rather peculiar issue; I also assumed that it would check the
standard locations, but I thought I would try it anyway and see what
happens.
The box is indeed a 64-bit system, but the packages being built are all
32-bit, and therefore all libraries being built are in the standard
locations.
Bruce McAlister <bruce.mcalister@blueface.ie> writes:
Bizarre ... I've never heard of a Unix system that didn't consider that
a default place to look. Unless this is a 64-bit machine and uuid
should have installed itself in /usr/lib64?
It is a rather peculiar issue; I also assumed that it would check the
standard locations, but I thought I would try it anyway and see what
happens.
The box is indeed a 64-bit system, but the packages being built are all
32-bit, and therefore all libraries being built are in the standard
locations.
Hmm ... it sounds like some part of the compile toolchain didn't get the
word about wanting to build 32-bit. Perhaps the switch you really need
is along the lines of CFLAGS=-m32.
regards, tom lane
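[Editor's note: a sketch of what that suggestion might look like on this box. The -m32 flag follows Tom's hint; Sun Studio 12 cc accepts -m32, while older releases used -xarch flags instead. Combining it with the earlier LDFLAGS fix is an assumption.]

```shell
# Hypothetical: force a 32-bit build so configure's link tests use the
# same memory model as the 32-bit OSSP uuid libraries in /usr/lib.
CFLAGS="-m32" LDFLAGS="-m32 -L/usr/lib" ./configure \
    --prefix=/opt/postgresql-v8.3.4 \
    --with-ossp-uuid
```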
On Tue, 28 Oct 2008, Jason Long wrote:
I also have to ship them off site using a T1 so setting the time to
automatically switch files will just waste bandwidth if they are still going
to be 16 MB anyway.
The best way to handle this is to clear the unused portion of the WAL file
and then compress it before sending over the link. There is a utility
named pg_clearxlogtail available at
http://www.2ndquadrant.com/replication.htm that handles the first part of
that, which you may find useful here.
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
On Wed, 2008-10-29 at 09:05 -0400, Greg Smith wrote:
On Tue, 28 Oct 2008, Jason Long wrote:
I also have to ship them off site using a T1 so setting the time to
automatically switch files will just waste bandwidth if they are still going
to be 16 MB anyway.
The best way to handle this is to clear the unused portion of the WAL file
and then compress it before sending over the link. There is a utility
named pg_clearxlogtail available at
http://www.2ndquadrant.com/replication.htm that handles the first part of
that, which you may find useful here.
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
What is to be gained over just using rsync with -z?
Joshua D. Drake
On Thu, 30 Oct 2008, Joshua D. Drake wrote:
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
What is to be gained over just using rsync with -z?
When a new XLOG segment is created, it gets zeroed out first, so that
there's no chance it can accidentally look like a valid segment. But when
an existing segment is recycled, it gets a new header and that's it--the
rest of the 16MB is still left behind from whatever was in that segment
before. That means that even if you only write, say, 1MB of new data to a
recycled segment before a timeout that causes you to ship it somewhere
else, there will still be a full 15MB worth of junk from its previous life
which may or may not be easy to compress.
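[Editor's note: the effect is easy to demonstrate outside PostgreSQL. A sketch, with hypothetical file names and GNU dd/gzip assumed: a 16MB file whose tail is leftover random data compresses far worse than one whose tail is zeroed.]

```shell
# Simulate a recycled segment: 1MB of "real" WAL data followed by a
# 15MB tail that is either leftover junk (random) or zeroed out.
dd if=/dev/urandom of=junk_tail.wal bs=1M count=16 2>/dev/null
dd if=/dev/urandom of=zero_tail.wal bs=1M count=1  2>/dev/null
dd if=/dev/zero    bs=1M count=15 >> zero_tail.wal 2>/dev/null

gzip -kf junk_tail.wal zero_tail.wal
# The zero-tailed file compresses to roughly 1MB; the junk-tailed one
# stays close to 16MB, since random data is incompressible.
ls -l junk_tail.wal.gz zero_tail.wal.gz
```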
I just noticed that recently this project has been pushed into
pgfoundry; it's at
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/
What clearxlogtail does is look inside the WAL segment, and it clears
the "tail" behind the portion of it that is really used. So our example file
would end up with just the 1MB of useful data, followed by 15MB of zeros
that will compress massively. Since it needs to know how XLogPageHeader
is formatted and if it makes a mistake your archive history will be
silently corrupted, it's kind of a scary utility to just download and use.
That's why I'd like to see it turn into a more official contrib module, so
that it will never lose sync with the page header format and be available
to anyone using PITR.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
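[Editor's note: the usual way to wire such a tool in is via archive_command, chaining the tail-clearing and compression before shipping. A sketch only; the archive path is hypothetical, pg_clearxlogtail is assumed to read a segment on stdin and write the cleared copy to stdout, and the corruption caveat above applies.]

```
archive_command = 'pg_clearxlogtail < %p | gzip > /archive/%f.gz'
```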
Greg Smith wrote:
On Thu, 30 Oct 2008, Joshua D. Drake wrote:
This reminds me yet again that pg_clearxlogtail should probably get added
to the next commitfest for inclusion into 8.4; it's really essential for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
What is to be gained over just using rsync with -z?
When a new XLOG segment is created, it gets zeroed out first, so that
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in
that segment before. That means that even if you only write, say, 1MB
of new data to a recycled segment before a timeout that causes you to
ship it somewhere else, there will still be a full 15MB worth of junk
from its previous life which may or may not be easy to compress.
I just noticed that recently this project has been pushed into
pgfoundry; it's at
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/clearxlogtail/clearxlogtail/
What clearxlogtail does is look inside the WAL segment, and it clears
the "tail" behind the portion of it that is really used. So our example
file would end up with just the 1MB of useful data, followed by 15MB
of zeros that will compress massively. Since it needs to know how
XLogPageHeader is formatted and if it makes a mistake your archive
history will be silently corrupted, it's kind of a scary utility to
just download and use.
I would really like to add something like this to my application.
1. Should I be scared or is it just scary in general?
2. Is this safe to use with 8.3.4?
3. Any pointers on how to install and configure this?
That's why I'd like to see it turn into a more official contrib
module, so that it will never lose sync with the page header format
and be available to anyone using PITR.
Greg Smith wrote:
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in that
segment before. That means that even if you only write, say, 1MB of new
[...]
What clearxlogtail does is look inside the WAL segment, and it clears
the "tail" behind the portion of it that is really used. So our example
file would end up with just the 1MB of useful data, followed by 15MB of
It sure would be nice if there was a way for PG itself to zero the
unused portion of logs as they are completed; perhaps this will make it
in as part of the ideas discussed on this list a while back to make a
more "out of the box" log-ship mechanism?
--
Kyle Cordes
http://kylecordes.com
Kyle Cordes wrote:
Greg Smith wrote:
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in
that segment before. That means that even if you only write, say,
1MB of new [...]
What clearxlogtail does is look inside the WAL segment, and it clears
the "tail" behind the portion of it that is really used. So our example
file would end up with just the 1MB of useful data, followed by 15MB of
It sure would be nice if there was a way for PG itself to zero the
unused portion of logs as they are completed; perhaps this will make
it in as part of the ideas discussed on this list a while back to make
a more "out of the box" log-ship mechanism?
I agree totally. I looked at the code for clearxlogtail and it seems
short and not very complex. Hopefully something like this will at least
be a trivial option to set up in 8.4.
On Thu, 30 Oct 2008, Kyle Cordes wrote:
It sure would be nice if there was a way for PG itself to zero the unused
portion of logs as they are completed; perhaps this will make it in as part
of the ideas discussed on this list a while back to make a more "out of the
box" log-ship mechanism?
The overhead of clearing out the whole thing is just large enough that it
can be disruptive on systems generating lots of WAL traffic, so you don't
want the main database processes bothering with that. A related fact is
that there is a noticeable slowdown to clients that need a segment switch
on a newly initialized and fast system that has to create all its WAL
segments, compared to one that has been active long enough to only be
recycling them. That's why this sort of thing has been getting pushed
into the archive_command path; nothing performance-sensitive that can slow
down clients is happening there, so long as your server is powerful enough
to handle that in parallel with everything else going on.
Now, it would be possible to have that less sensitive archive code path
zero things out, but you'd need to introduce a way to note when it's been
done (so you don't do it for a segment twice) and a way to turn it off so
everybody doesn't go through that overhead (which probably means another
GUC). That's a bit too much trouble to go through just for a feature with a
fairly limited use-case that can easily live outside of the engine
altogether.
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD