Recovery inconsistencies, standby much larger than primary
Since the point release we've run into a number of databases that when
we restore from a base backup end up being larger than the primary
database was. Sometimes by a large factor. The data below is from
9.1.11 (both primary and standby) but we've seen the same thing on
9.2.6.
primary$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 29G total
1364767 23G total
1366221 12G total
473158 76G total
standby$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 55G total
1364767 28G total
1366221 17G total
473158 139G total
I've run the snaga xlogdump on the WAL records replayed before reaching
a consistent point (we confirmed the extra storage had already appeared
by then) and grepped for the above relfilenodes, but the resulting
extracts are still quite large. I believe these dumps don't contain any
sensitive data; once I've verified that, I can upload one of them for
inspection.
$ ls -lh [14]*
-rw-rw-r-- 1 heroku heroku 325M Jan 24 04:13 1261982
-rw-r--r-- 1 root root 352M Jan 25 00:04 1364767
-rw-r--r-- 1 root root 123M Jan 25 00:04 1366221
-rw-r--r-- 1 root root 357M Jan 25 00:04 473158
The first three are btrees and the fourth is a heap, btw.
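(Roughly speaking, extracts like these can be produced with something
along the following lines; the combined dump filename is made up:)

for i in 1261982 1364767 1366221 473158 ; do
    # keep only the records touching this relfilenode in database 16385
    grep "16385/$i" xlogdump_all.txt > "$i"
done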
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Much of the added space is uninitialized pages as you might expect but
what I don't understand is how the database can start up without running
into the "reference to invalid pages" panic consistently. We check
both that there are no references after consistency is reached *and*
that any references before consistency are resolved by a truncate or
unlink before consistency.
The primary was never this large btw, so it's not just a case of
leftover files from drops or truncates that might have failed on the
standby.
I'm assuming this is somehow related to the multixact or transaction
wraparound problems but I don't really understand how they could be
hitting when both the primary and standby are post-upgrade to the most
recent point release, which has the fixes.
--
greg
Hi,
On 2014-01-24 19:23:28 -0500, Greg Stark wrote:
Since the point release we've run into a number of databases that when
we restore from a base backup end up being larger than the primary
database was. Sometimes by a large factor. The data below is from
9.1.11 (both primary and standby) but we've seen the same thing on
9.2.6.
What's the procedure for creating those standbys? Were they of similar
size after being cloned?
primary$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 29G total
1364767 23G total
1366221 12G total
473158 76G total
standby$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 55G total
1364767 28G total
1366221 17G total
473158 139G total
...
The first three are btrees and the fourth is a heap, btw.
Are those all of the same underlying heap relation?
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Uhm. I am a bit confused. You see those in the standby's log? At !debug
log levels? That'd imply that the standby is dead and needed to be
recloned, no? How do you continue after that?
Much of the added space is uninitialized pages as you might expect but
what I don't understand is how the database can start up without running
into the "reference to invalid pages" panic consistently. We check
both that there are no references after consistency is reached *and*
that any references before consistency are resolved by a truncate or
unlink before consistency.
Well, it's pretty easy to get into a situation with lots of new
pages. Lots of concurrent inserts that all fail before logging WAL. The
next insert will extend the relation and only initialise that last
page.
It'd be interesting to look for TRUNCATE records using xlogdump. Could
you show those for starters?
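Something along these lines should do as a first pass (the dump
filename is made up):

# count and show relation truncation records in the xlogdump output
grep -ci truncate xlogdump_all.txt
grep -i truncate xlogdump_all.txt | head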
I'm assuming this is somehow related to the multixact or transaction
wraparound problems but I don't really understand how they could be
hitting when both the primary and standby are post-upgrade to the most
recent point release, which has the fixes.
That doesn't sound likely. For one the symptoms don't fit, for another,
those problems are mostly 9.3+.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Sun, Jan 26, 2014 at 9:45 AM, Andres Freund <andres@2ndquadrant.com> wrote:
Hi,
On 2014-01-24 19:23:28 -0500, Greg Stark wrote:
Since the point release we've run into a number of databases that when
we restore from a base backup end up being larger than the primary
database was. Sometimes by a large factor. The data below is from
9.1.11 (both primary and standby) but we've seen the same thing on
9.2.6.
What's the procedure for creating those standbys? Were they of similar
size after being cloned?
These are restored from base backup using WAL-E and then started in
standby mode. The logs are retrieved using archive_command (which is
WAL-E); after it has retrieved lots of archived WAL, the database
switches to streaming.
We confirmed from size monitoring that the standby database grew
substantially before the time it reported reaching consistent state,
so I only downloaded the WAL from that range for analysis.
primary$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 29G total
1364767 23G total
1366221 12G total
473158 76G total
standby$ for i in 1261982 1364767 1366221 473158 ; do echo -n "$i " ;
du -shc $i* | tail -1 ; done
1261982 55G total
1364767 28G total
1366221 17G total
473158 139G total
...
The first three are btrees and the fourth is a heap, btw.
Are those all of the same underlying heap relation?
Are you asking whether the relfilenode was reused for a different
relation? I doubt it.
Or are you asking if the first three indexes are for the same heap
(presumably the fourth one)? I don't think so but I can check.
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Uhm. I am a bit confused. You see those in the standby's log? At !debug
log levels? That'd imply that the standby is dead and needed to be
recloned, no? How do you continue after that?
It's possible I'm confusing symptoms from an unrelated problem. But
the symptom we saw was that it got this error, recovery crashed, then
recovery started again and it replayed fine. I agree that doesn't jibe
with the code I see in 9.3, though I didn't check how long the code has
been like this.
Much of the added space is uninitialized pages as you might expect but
what I don't understand is how the database can start up without running
into the "reference to invalid pages" panic consistently. We check
both that there are no references after consistency is reached *and*
that any references before consistency are resolved by a truncate or
unlink before consistency.
Well, it's pretty easy to get into a situation with lots of new
pages. Lots of concurrent inserts that all fail before logging WAL. The
next insert will extend the relation and only initialise that last
page.
It'd be interesting to look for TRUNCATE records using xlogdump. Could
you show those for starters?
There are no records matching grep -i truncate in any of those
extracts for those relfilenodes. I'm grepping the whole xlogdump now
but it'll take a while. So far no truncates anywhere.
I'm assuming this is somehow related to the multixact or transaction
wraparound problems but I don't really understand how they could be
hitting when both the primary and standby are post-upgrade to the most
recent point release, which has the fixes.
That doesn't sound likely. For one the symptoms don't fit, for another,
those problems are mostly 9.3+.
These problems all started to appear after the latest point release
btw. That could just be a coincidence of course.
--
greg
On Sun, Jan 26, 2014 at 5:45 PM, Andres Freund <andres@2ndquadrant.com> wrote:
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Uhm. I am a bit confused. You see those in the standby's log? At !debug
log levels? That'd imply that the standby is dead and needed to be
recloned, no? How do you continue after that?
So in chatting with Heikki last night we came up with a scenario where
this check is insufficient.
If you have multiple checkpoints during the base backup then there
will be restartpoints during recovery. If the reference to the invalid
page is before the restartpoint, then after recovery crashes and comes
back up it resumes from the restartpoint: the in-memory hash of invalid
pages is empty again and the reference is never replayed, so recovery
goes forward fine.
Fixing this check doesn't look trivial. I'm inclined to say we should
suppress any restartpoints while there are references to invalid pages
in the hash. The problem with that is that it will prevent trimming
the xlog during recovery. The frightening part is that on most days
recovery will take little extra space, but if you happen to have a drop
table or truncate during the base backup then your recovery might
require a lot of extra space.
The alternative of spilling the hash table to disk at every
restartpoint seems kind of hokey. Then we need to worry about fsyncing
this file, cleaning it up, dealing with the file after crashes, etc.
--
greg
On 2014-01-31 11:09:14 +0000, Greg Stark wrote:
On Sun, Jan 26, 2014 at 5:45 PM, Andres Freund <andres@2ndquadrant.com> wrote:
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Uhm. I am a bit confused. You see those in the standby's log? At !debug
log levels? That'd imply that the standby is dead and needed to be
recloned, no? How do you continue after that?
So in chatting with Heikki last night we came up with a scenario where
this check is insufficient.
But that seems unrelated to the issue at hand, right?
If you have multiple checkpoints during the base backup then there
will be restartpoints during recovery. If the reference to the invalid
page is before the restartpoint then after crashing recovery and coming
back up the recovery will go forward fine.
We don't perform restartpoints if there are invalid pages
registered. Check the XLogHaveInvalidPages() call in xlog.c.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 2014-01-31 11:09:14 +0000, Greg Stark wrote:
On Sun, Jan 26, 2014 at 5:45 PM, Andres Freund <andres@2ndquadrant.com> wrote:
We're also seeing log entries about "wal contains reference to invalid
pages" but these errors seem only vaguely correlated. Sometimes we get
the errors but the tables don't grow noticeably and sometimes we don't
get the errors and the tables are much larger.
Uhm. I am a bit confused. You see those in the standby's log? At !debug
log levels? That'd imply that the standby is dead and needed to be
recloned, no? How do you continue after that?
So in chatting with Heikki last night we came up with a scenario where
this check is insufficient.
The slightly more likely explanation for transient errors is that you
hit the vacuum bug (061b079f89800929a863a692b952207cadf15886). That had
only taken effect if HS has already assembled a snapshot, which can make
such an error vanish after a restart...
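To see exactly which fix that is and which releases contain it, in a
checkout of the postgres git repository something like this should
work:

# show the commit being referred to, and the release tags containing it
git show --stat 061b079f89800929a863a692b952207cadf15886
git tag --contains 061b079f89800929a863a692b952207cadf15886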
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Jan 31, 2014 at 11:26 AM, Andres Freund <andres@2ndquadrant.com> wrote:
The slightly more likely explanation for transient errors is that you
hit the vacuum bug (061b079f89800929a863a692b952207cadf15886). That had
only taken effect if HS has already assembled a snapshot, which can make
such an error vanish after a restart...
Which one? There seem to be several....
So this seems like it's more likely to be a symptom of whatever is
causing the table to grow than a cause? That is, there's some bug
causing the standby to extend the btree dramatically, resulting in lots
of uninitialized pages, and touching those pages triggers this bug. But
I don't think that explains why the btree is being extended.
--
greg
On 2014-01-31 11:46:09 +0000, Greg Stark wrote:
On Fri, Jan 31, 2014 at 11:26 AM, Andres Freund <andres@2ndquadrant.com> wrote:
The slightly more likely explanation for transient errors is that you
hit the vacuum bug (061b079f89800929a863a692b952207cadf15886). That had
only taken effect if HS has already assembled a snapshot, which can make
such an error vanish after a restart...
Which one? There seem to be several....
So this seems like it's more likely to be a symptom of whatever is
causing the table to grow than a cause? That is, there's some bug
causing the standby to extend the btree dramatically, resulting in lots
of uninitialized pages, and touching those pages triggers this bug. But
I don't think that explains why the btree is being extended.
I don't think anything we've talked about so far is likely to explain
the issue. I don't have time atm to look closer, but what I'd do is
check whether there are any pages with valid LSNs on the standby in the
bloated area... That might give you a hint where to look.
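A rough way to do that with pageinspect (database name, index name and
block range are placeholders; the extension has to be available on the
standby):

# probe a range of blocks in the bloated region for non-zero page LSNs
for blk in $(seq 3700000 3700100); do
    lsn=$(psql -d yourdb -Atc \
        "select (page_header(get_raw_page('data_pkey', 'main', $blk))).lsn")
    [ "$lsn" != "0/0" ] && echo "block $blk: lsn $lsn"
done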
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
1261982.53 is entirely NULs. I think that's true for most, if not all,
of the intervening files; still investigating.
The 54th segment is NUL up to offset 1f0c0000, after which it has
valid-looking blocks:
# hexdump 1261982.54 | head -100
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
1f0c0000 0ea1 0000 8988 0063 0006 0000 04d8 0cf0
However when I grep xlogdump for any records mentioning this block I
get nothing.
In fact the largest block I find in the xlog is 3646630:
# grep 'tid 3646631/' 1261982 | wc -l
0
# grep 'tid 3646630/' 1261982 | wc -l
177
Looking at the block above, the LSN appears to be EA1/638988, which I
do find in the logs, but it's a btree insert on a different btree:
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] bkpblock[1]: s/d/r:1663/16385/1261982
blk:3634978 hole_off/len:1240/2072
[cur:EA1/638988, xid:1418089147, rmid:11(Btree), len/tot_len:18/5894,
info:8, prev:EA1/637140] insert_leaf: s/d/r:1663/16385/1364767 tid
2746914/219
[cur:EA1/638988, xid:1418089147, rmid:11(Btree), len/tot_len:18/5894,
info:8, prev:EA1/637140] bkpblock[1]: s/d/r:1663/16385/1364767
blk:2746914 hole_off/len:1180/2372
[cur:EA1/63A0A8, xid:1418089147, rmid:1(Transaction),
len/tot_len:32/64, info:0, prev:EA1/638988] d/s:16385/1663 commit at
2014-01-21 05:41:11 UTC
On 2014-01-31 14:39:47 +0000, Greg Stark wrote:
1261982.53 is entirely NULs. I think that's true for most, if not all,
of the intervening files; still investigating.
The 54th segment is NUL up to offset 1f0c0000, after which it has
valid-looking blocks:
It'd be interesting to dump the page header for that using pageinspect.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Jan 31, 2014 at 2:39 PM, Greg Stark <stark@mit.edu> wrote:
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] bkpblock[1]: s/d/r:1663/16385/1261982
blk:3634978 hole_off/len:1240/2072
[cur:EA1/638988, xid:1418089147, rmid:11(Btree), len/tot_len:18/5894,
info:8, prev:EA1/637140] insert_leaf: s/d/r:1663/16385/1364767 tid
2746914/219
Actually wait, the immediate previous record is indeed on the right
filenode. Is the LSN printed in xlogdump the LSN that would be in the
pd_lsn or is the pd_lsn going to be from the following record?
--
greg
On 2014-01-31 14:59:21 +0000, Greg Stark wrote:
On Fri, Jan 31, 2014 at 2:39 PM, Greg Stark <stark@mit.edu> wrote:
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] bkpblock[1]: s/d/r:1663/16385/1261982
blk:3634978 hole_off/len:1240/2072
[cur:EA1/638988, xid:1418089147, rmid:11(Btree), len/tot_len:18/5894,
info:8, prev:EA1/637140] insert_leaf: s/d/r:1663/16385/1364767 tid
2746914/219
Actually wait, the immediate previous record is indeed on the right
filenode. Is the LSN printed in xlogdump the LSN that would be in the
pd_lsn or is the pd_lsn going to be from the following record?
It points to the end of the record (i.e. the beginning of the next). It
needs to, because otherwise XLogFlush()es on the pd_lsn wouldn't flush
enough.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Jan 31, 2014 at 3:08 PM, Andres Freund <andres@2ndquadrant.com> wrote:
It points to the end of the record (i.e. the beginning of the next). It
needs to, because otherwise XLogFlush()es on the pd_lsn wouldn't flush
enough.
Ah, in which case the relevant record is:
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] insert_leaf: s/d/r:1663/16385/1261982 tid
3634978/282
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] bkpblock[1]: s/d/r:1663/16385/1261982
blk:3634978 hole_off/len:1240/2072
It looks like get_raw_page() refuses to read past the end of relpages.
I could make a clone of this database to allow experimenting with
tweaking relpages but it may or may not reproduce the problem...
=# select pg_relation_size('data_pkey') / 1024 / 1024 / 1024;
?column?
----------
23
(1 row)
=# select get_raw_page('data_pkey', 'main', 11073632) ;
ERROR: block number 11073632 is out of range for relation "data_pkey"
d9de7pcqls4ib6=# select relpages from pg_class where relname = 'data_pkey';
relpages
----------
2889286
--
greg
On 2014-01-31 15:15:24 +0000, Greg Stark wrote:
On Fri, Jan 31, 2014 at 3:08 PM, Andres Freund <andres@2ndquadrant.com> wrote:
It points to the end of the record (i.e. the beginning of the next). It
needs to, because otherwise XLogFlush()es on the pd_lsn wouldn't flush
enough.
Ah, in which case the relevant record is:
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] insert_leaf: s/d/r:1663/16385/1261982 tid
3634978/282
[cur:EA1/637140, xid:1418089147, rmid:11(Btree), len/tot_len:18/6194,
info:8, prev:EA1/635290] bkpblock[1]: s/d/r:1663/16385/1261982
blk:3634978 hole_off/len:1240/2072
It looks like get_raw_page() refuses to read past the end of relpages.
I could make a clone of this database to allow experimenting with
tweaking relpages but it may or may not reproduce the problem...
No, it uses smgrnblocks() to get the size.
=# select get_raw_page('data_pkey', 'main', 11073632) ;
ERROR: block number 11073632 is out of range for relation "data_pkey"
Isn't the page 3634978?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Fri, Jan 31, 2014 at 3:19 PM, Andres Freund <andres@2ndquadrant.com> wrote:
=# select get_raw_page('data_pkey', 'main', 11073632) ;
ERROR: block number 11073632 is out of range for relation "data_pkey"
Isn't the page 3634978?
The page in the record is.
But the page on disk is in the 54th segment at offset 1F0C0000
So unless my arithmetic is wrong:
bc -l
ibase=16
400 * 400 * 400 / 2000 * 54 + 1F0C0000 / 2000
11073632
--
greg
On 2014-01-31 15:21:35 +0000, Greg Stark wrote:
On Fri, Jan 31, 2014 at 3:19 PM, Andres Freund <andres@2ndquadrant.com> wrote:
=# select get_raw_page('data_pkey', 'main', 11073632) ;
ERROR: block number 11073632 is out of range for relation "data_pkey"
Isn't the page 3634978?
The page in the record is.
It'd be interesting to look at the referenced page using bt_page_items().
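For instance, something like this (database name is a placeholder):

# dump the item pointers/keys on the referenced index page
psql -d yourdb -c "select * from bt_page_items('data_pkey', 3634978) limit 20;"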
But the page on disk is in the 54th segment at offset 1F0C0000
It's interesting that the smgr gets this wrong then (as also evidenced
by the fact that relation_size does as well). Could you please do a ls
-l path/to/relfilenode*?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Andres Freund <andres@2ndquadrant.com> writes:
It's interesting that the smgr gets this wrong then (as also evidenced
by the fact that relation_size does as well). Could you please do a ls
-l path/to/relfilenode*?
IIRC, smgrnblocks will stop as soon as it finds a segment that is not
1GB in size. Could you check the lengths of all segments for that
relation?
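Something like this should make a short segment stand out (the path is
a placeholder; the final segment is of course allowed to be short):

# print any segment whose size is not exactly 1GB
ls -l /path/to/data/base/16385/1261982* | awk '$5 != 1073741824 {print $NF, $5}'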
regards, tom lane
On 2014-01-31 10:33:16 -0500, Tom Lane wrote:
Andres Freund <andres@2ndquadrant.com> writes:
It's interesting that the smgr gets this wrong then (as also evidenced
by the fact that relation_size does as well). Could you please do a ls
-l path/to/relfilenode*?
IIRC, smgrnblocks will stop as soon as it finds a segment that is not
1GB in size. Could you check the lengths of all segments for that
relation?
Yea, that's what I am wondering about. I wanted the full list because
there could be an entire file missing and it's interesting to see at
which time they were last touched relative to each other...
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Greg Stark <stark@mit.edu> writes:
On Fri, Jan 31, 2014 at 3:19 PM, Andres Freund <andres@2ndquadrant.com> wrote:
Isn't the page 3634978?
The page in the record is.
But the page on disk is in the 54th segment at offset 1F0C0000
So unless my arithmetic is wrong:
bc -l
ibase=16
400 * 400 * 400 / 2000 * 54 + 1F0C0000 / 2000
11073632
At least two of us are confused. I get
# select ((2^30) * 54.0 + 'x1F0C0000'::bit(32)::int) / 8192;
?column?
----------
7141472
(1 row)
regards, tom lane
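(For what it's worth, the discrepancy between the two calculations is
bc's input base handling: once ibase=16 is set, every following literal
is read as hex, so the 54 in the bc session above is really 0x54 = 84.
Writing decimal 54 as hex 36 gives the same block number as the SQL
above.)

$ echo 'ibase=16; 54' | bc
84
$ echo 'ibase=16; 400 * 400 * 400 / 2000 * 36 + 1F0C0000 / 2000' | bc
7141472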
Sorry guys. I transposed two numbers when looking up the relation.
"data_pk" wasn't the right index.
=# select (page_header(get_raw_page('index_data_id', 'main', 3020854))).* ;
     lsn      | tli | flags | lower | upper | special | pagesize | version | prune_xid
--------------+-----+-------+-------+-------+---------+----------+---------+-----------
 CF0/2DD67BB8 |   5 |     0 |   792 |  5104 |    8176 |     8192 |       4 |         0
(1 row)