what to revert
On Tue, May 3, 2016 at 11:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
There are a lot more than 2 patchsets that are busted at the moment,
unfortunately, but I assume you are referring to "snapshot too old"
and "Use Foreign Key relationships to infer multi-column join
selectivity".
Yeah, those are the ones I'm thinking of. I've not heard serious
proposals to revert any others, have you?
Here's a list of what I think is currently broken in 9.6 that we might
conceivably fix by reverting patches:
- Snapshot Too Old. Tom, Andres, and Bruce want this reverted. It
regresses performance significantly when turned on. It originally
regressed performance significantly even when turned off, but that
might be fixed now. Also, it seems to be broken for hash indexes, per
Amit Kapila's report.
- Atomic Pin/Unpin. There are two different open items related to
this, both apparently relating to testing done by Jeff Janes. I am
not sure whether they are really independent reports of different
problems or just two reports of the same problem. But they sound like
fairly serious breakage.
- Predefined Roles. Neither you nor I liked Stephen's design. It
slowed down pg_dump. It also broke pg_dump for non-superusers and
something about bypassrls. None of these issues have been fixed
despite considerable time having gone by.
- Checkpoint Sorting and Flushing. Andres tried to fix the last
report of problems with this in
72a98a639574d2e25ed94652848555900c81a799, but there were almost
immediately new reports.
- Foreign Keys and Multi-Column Outer Joins. You called for a revert,
and the author responded with various thoughts and counterproposals.
There are a few other open items, but I would consider reverting the
antecedent patches as either impractical or disproportionate in those
cases. Aside from the two cases you mentioned, I don't think that
anyone's actually called for these other patches to be reverted, but
I'm not sure that we shouldn't be considering it. What do you (and
others) think?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Tue, May 3, 2016 at 10:58 AM, Robert Haas <robertmhaas@gmail.com> wrote:
- Snapshot Too Old. Tom, Andres, and Bruce want this reverted.
It regresses performance significantly when turned on.
When turned on, it improves performance in some cases and regresses
performance in others. Don't forget it is currently back-patched
to 9.4 and in use for production by users who could not use
PostgreSQL without the feature. PostgreSQL failed their
performance tests miserably without the feature, and passes with
it.
It originally regressed performance significantly even when
turned off,
Which was wildly exaggerated since most of the benchmarks
purporting to show that actually had it turned on. I don't think
the FUD from that has really evaporated.
but that might be fixed now.
Certainly all evidence suggests that, FUD to the contrary.
Also, it seems to be broken for hash indexes, per Amit Kapila's
report.
Yeah, with a fairly simple fix suggested immediately by Amit. I'm
looking into a couple other angles for possible fixes, but
certainly what he suggested could be done before beta1.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-05-03 11:58:34 -0400, Robert Haas wrote:
- Atomic Pin/Unpin. There are two different open items related to
this, both apparently relating to testing done by Jeff Janes. I am
not sure whether they are really independent reports of different
problems or just two reports of the same problem. But they sound like
fairly serious breakage.
It's a bit hard to say, given the tests take this long to run, but so far
I have a fair amount of doubt that the bug report is actually related to
atomic pin/unpin. It appears to me - I'm in the process of trying
to confirm (again, long runs) - that the pin/unpin patch merely changed
the timing.
- Checkpoint Sorting and Flushing. Andres tried to fix the last
report of problems with this in
72a98a639574d2e25ed94652848555900c81a799, but there were almost
immediately new reports.
Yea, it's an issue with commit 72a98a639574d2e25ed94652848555900c81a799,
not checkpoint flushing itself. Turns out that mdsync() *wants/needs* to
access deleted segments. Easily enough fixed. I've posted a patch, which
I plan to commit unless somebody wants to give input on the flag design.
There are a few other open items, but I would consider reverting the
antecedent patches as either impractical or disproportionate in those
cases. Aside from the two cases you mentioned, I don't think that
anyone's actually called for these other patches to be reverted, but
I'm not sure that we shouldn't be considering it. What do you (and
others) think?
I'm *really* doubtful about the logical timeline following patches. I
think they're, as committed, over-complicated and pretty much unusable.
Greetings,
Andres Freund
On 2016-05-03 11:12:03 -0500, Kevin Grittner wrote:
On Tue, May 3, 2016 at 10:58 AM, Robert Haas <robertmhaas@gmail.com> wrote:
- Snapshot Too Old. Tom, Andres, and Bruce want this reverted.
It regresses performance significantly when turned on.
When turned on, it improves performance in some cases and regresses
performance in others. Don't forget it is currently back-patched
to 9.4 and in use for production by users who could not use
PostgreSQL without the feature. PostgreSQL failed their
performance tests miserably without the feature, and passes with
it.
It originally regressed performance significantly even when
turned off,
Which was wildly exaggerated since most of the benchmarks
purporting to show that actually had it turned on. I don't think
the FUD from that has really evaporated.
Oh, ffs. The first report of the whole issue was *with default
parameters*:
http://archives.postgresql.org/message-id/CAPpHfdtMONZFOXSsw1HkrD9Eb4ozF8Q8oCqH4tkpH_girJPPuA%40mail.gmail.com
The issue with 0 vs. -1 (and since 0 vs. > 0 makes a big performance
difference, it's not that surprising to interpret the numbers that way)
was immediately addressed by another round of benchmarks after you
pointed it out.
but that might be fixed now.
Certainly all evidence suggests that, FUD to the contrary.
So it's now FUD to report issues with a patch that obviously hasn't
received sufficient benchmarking? Give me break.
Andres
On Tue, May 3, 2016 at 12:22 PM, Andres Freund <andres@anarazel.de> wrote:
but that might be fixed now.
Certainly all evidence suggests that, FUD to the contrary.
So it's now FUD to report issues with a patch that obviously hasn't
received sufficient benchmarking? Give me break.
Yeah, I don't think that's FUD. Kevin, since your last fix, we don't
have a round of benchmarking on a big machine to show whether that
fixed the issue or not. I think that to really know whether this is
fixed, somebody would need to compare current master with current
master after reverting snapshot too old on a big machine and see if
there's a difference. If anyone has done that, they have not posted
the results. So it's more accurate to say that we just don't know.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, May 3, 2016 at 11:22 AM, Andres Freund <andres@anarazel.de> wrote:
The issue with 0 v. -1 (and 0 vs. > 0 makes a big performance
difference, so it's not that surprising to interpret numbers that way)
... if you fail to read the documentation of the feature or the
code implementing it before testing.
was immediately addressed by another round of benchmarks after you
pointed it out.
Which showed a 4% maximum hit before moving the test for whether it
was "off" inline. (I'm not clear from the posted results whether
that was before or after skipping the spinlock when the feature was
off.) All tests that I have done and that others have done (some
on big NUMA machines) and shared with me show no regression now. I
haven't been willing to declare victory on that basis without
hearing back from others who were able to show a regression before.
If there is still a regression found when "off", there is one more
test of old_snapshot_threshold which could easily be shifted
in-line; it just seems unnecessary given the other work done in
that area unless there is evidence that it is really needed.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Andres Freund wrote:
I'm *really* doubtful about the logical timeline following patches. I
think they're, as committed, over-complicated and pretty much unusable.
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, May 3, 2016 at 9:58 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
I think it's a positive development that we can take this attitude to
reverting patches. It should not be seen as a big personal failure,
because it isn't. Stigmatizing reverts incentivizes behavior that
leads to bad outcomes.
--
Peter Geoghegan
* Robert Haas (robertmhaas@gmail.com) wrote:
Here's a list of what I think is currently broken in 9.6 that we might
conceivably fix by reverting patches:
[...]
- Predefined Roles. Neither you nor I liked Stephen's design. It
slowed down pg_dump. It also broke pg_dump for non-superusers and
something about bypassrls. None of these issues have been fixed
despite considerable time having gone by.
The issues you list are not with predefined roles at all, but with the
changes to dump ACLs defined against objects in pg_catalog. I've also
got patches to address those issues already developed and (mostly)
posted. I'll be posting a new set later today which addresses all of
the known issues with dumping catalog ACLs.
There is an ongoing thread where Noah and I have been discussing the
dumping of catalog ACLs issues and the TAP test suite which I've been
developing for pg_dump. Certainly, anyone is welcome to join in that
discussion.
As mentioned before, the concern raised about predefined roles are the
checks which were added to make them unlike regular roles. I'll be
posting a patch tomorrow to revert those checks, as I mentioned on
another thread earlier today.
Thanks!
Stephen
On 2016-05-03 11:46:23 -0500, Kevin Grittner wrote:
was immediately addressed by another round of benchmarks after you
pointed it out.
Which showed a 4% maximum hit before moving the test for whether it
was "off" inline.
(I'm not clear from the posted results whether that was before or
after skipping the spinlock when the feature was off.)
They're from after the spinlock issue was resolved. Before that the
issue was a lot worse (see mail linked two messages upthread).
I'm pretty sure that I said that somewhere else at least once: But to be
absolutely clear, I'm *not* really concerned with the performance with
the feature turned off. I'm concerned about the performance with it
turned on.
Andres
On 2016-05-03 10:12:51 -0700, Peter Geoghegan wrote:
On Tue, May 3, 2016 at 9:58 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
I think it's a positive development that we can take this attitude to
reverting patches. It should not be seen as a big personal failure,
because it isn't. Stigmatizing reverts incentivizes behavior that
leads to bad outcomes.
+1
On 05/03/2016 07:12 PM, Peter Geoghegan wrote:
On Tue, May 3, 2016 at 9:58 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
I think it's a positive development that we can take this attitude to
reverting patches. It should not be seen as a big personal failure,
because it isn't. Stigmatizing reverts incentivizes behavior that
leads to bad outcomes.
Absolutely +1
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 05/03/2016 07:41 PM, Andres Freund wrote:
On 2016-05-03 11:46:23 -0500, Kevin Grittner wrote:
was immediately addressed by another round of benchmarks after you
pointed it out.
Which showed a 4% maximum hit before moving the test for whether it
was "off" inline.
(I'm not clear from the posted results whether that was before or
after skipping the spinlock when the feature was off.)
They're from after the spinlock issue was resolved. Before that the
issue was a lot worse (see mail linked two messages upthread).
I'm pretty sure that I said that somewhere else at least once: But to
be absolutely clear, I'm *not* really concerned with the performance
with the feature turned off. I'm concerned about the performance with
it turned on.
If you tell me how to best test it, I do have a 4-socket server sitting
idly in the corner (well, a corner reachable by SSH). I can get us some
numbers, but I haven't been following the snapshot_too_old so I'll need
some guidance on what to test.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, May 3, 2016 at 9:57 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
If you tell me how to best test it, I do have a 4-socket server sitting idly
in the corner (well, a corner reachable by SSH). I can get us some numbers,
but I haven't been following the snapshot_too_old so I'll need some guidance
on what to test.
I worry about two contention points with the current implementation.
The main one is the locking within MaintainOldSnapshotTimeMapping()
that gets called every time a snapshot is taken. AFAICS this should
show up by setting old_snapshot_threshold to any positive value and
then running a simple read-only pgbench, at a scale factor that fits in
shared buffers, at high concurrency (number of CPUs or a small multiple). On a
single socket system this does not show up.
The second one is probably a bit harder to hit,
GetOldSnapshotThresholdTimestamp() has a spinlock that gets hit
every time a scan sees a page that has been modified after the snapshot
was taken. A workload that would tickle this is something that uses a
repeatable read snapshot, builds a non-temporary table and runs
reporting on it. Something like this would work:
BEGIN ISOLATION LEVEL REPEATABLE READ;
DROP TABLE IF EXISTS test_:client_id;
CREATE TABLE test_:client_id (x int, filler text);
INSERT INTO test_:client_id SELECT x, repeat(' ', 1000) AS filler
FROM generate_series(1,1000) x;
SELECT (SELECT COUNT(*) FROM test_:client_id WHERE x != y) FROM
generate_series(1,1000) y;
COMMIT;
With this script running with -c4 on a 4 core workstation I'm seeing
the following kind of contention and a >2x loss in throughput:
+ 14.77% postgres postgres [.] GetOldSnapshotThresholdTimestamp
- 8.01% postgres postgres [.] s_lock
- s_lock
+ 88.15% GetOldSnapshotThresholdTimestamp
+ 10.47% TransactionIdLimitedForOldSnapshots
+ 0.71% TestForOldSnapshot_impl
+ 0.57% GetSnapshotCurrentTimestamp
Now this is kind of an extreme example, but I'm willing to bet that on
multi socket hosts similar issues can crop up with common real world
use cases.
Regards,
Ants Aasma
On 2016-05-04 00:01:20 +0300, Ants Aasma wrote:
On Tue, May 3, 2016 at 9:57 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
If you tell me how to best test it, I do have a 4-socket server sitting idly
in the corner (well, a corner reachable by SSH). I can get us some numbers,
but I haven't been following the snapshot_too_old so I'll need some guidance
on what to test.
I worry about two contention points with the current implementation.
The main one is the locking within MaintainOldSnapshotTimeMapping()
that gets called every time a snapshot is taken. AFAICS this should
show up by setting old_snapshot_threshold to any positive value and
then running a simple read-only pgbench, at a scale factor that fits in
shared buffers, at high concurrency (number of CPUs or a small multiple). On a
single socket system this does not show up.
On a two socket system it does, check the bottom of:
http://archives.postgresql.org/message-id/20160413171955.i53me46fqqhdlztq%40alap3.anarazel.de
Note that this (accidentally) compares thresholds 0 and 10 (instead of
-1 and 10), but that's actually interesting for this question because of
the quick exit in MaintainOldSnapshotTimeMapping():
/* No further tracking needed for 0 (used for testing). */
if (old_snapshot_threshold == 0)
return;
which happens before the lwlock is taken.
The second one is probably a bit harder to hit,
GetOldSnapshotThresholdTimestamp() has a spinlock that gets hit
every time a scan sees a page that has been modified after the snapshot
was taken. A workload that would tickle this is something that uses a
repeatable read snapshot, builds a non-temporary table and runs
reporting on it.
I'm not particularly concerned about that kind of issue - we can quite
easily replace that spinlock with 64bit atomic reads/writes for which
I've already proposed a patch. I'd expect that to go into 9.7.
Greetings,
Andres Freund
On 2016-05-03 20:57:13 +0200, Tomas Vondra wrote:
On 05/03/2016 07:41 PM, Andres Freund wrote:
On 2016-05-03 11:46:23 -0500, Kevin Grittner wrote:
was immediately addressed by another round of benchmarks after you
pointed it out.
Which showed a 4% maximum hit before moving the test for whether it
was "off" inline.
(I'm not clear from the posted results whether that was before or
after skipping the spinlock when the feature was off.)
They're from after the spinlock issue was resolved. Before that the
issue was a lot worse (see mail linked two messages upthread).
I'm pretty sure that I said that somewhere else at least once: But to
be absolutely clear, I'm *not* really concerned with the performance
with the feature turned off. I'm concerned about the performance with
it turned on.
If you tell me how to best test it, I do have a 4-socket server sitting idly
in the corner (well, a corner reachable by SSH). I can get us some numbers,
but I haven't been following the snapshot_too_old so I'll need some guidance
on what to test.
I think it'd be cool if you could test the effect of the feature in
read-only (and additionally read-mostly?) workload with various client
counts and old_snapshot_threshold values. For the latter maybe -1, 0, 10, 60
or such? I've done so (accidentally comparing 0 and 1 instead of -1 and
1) on a two socket machine in:
www.postgresql.org/message-id/20160413171955.i53me46fqqhdlztq@alap3.anarazel.de
It'd be very interesting to see how big the penalty is on a bigger box.
On 4 May 2016 at 01:12, Peter Geoghegan <pg@heroku.com> wrote:
On Tue, May 3, 2016 at 9:58 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
I think it's a positive development that we can take this attitude to
reverting patches. It should not be seen as a big personal failure,
because it isn't. Stigmatizing reverts incentivizes behavior that
leads to bad outcomes.
Indeed. Being scared to revert also raises the barrier to committing
something and getting it into more hands, for longer, under a variety of
different conditions. Not a ton of help with this particular feature
but quite important with others.
While I'm personally disappointed by this outcome, I also can't really
disagree with it. The whole area is a bit of a mess and hard to work on,
and, as demonstrated, my understanding of it when I wrote the original patch
was incomplete. We could commit the rewritten function, but ... rewriting a
feature just before beta1 probably says it's just not baked yet, right?
The upside of all this is that now I have a decent picture of how it should
all look and how timeline following, failover capability etc can be
introduced in a staged way. I'd also like to get rid of the use of various
globals to pass timeline information "around" the walsender and share more
of the logic between the code paths.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Tue, May 3, 2016 at 9:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Tue, May 3, 2016 at 11:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
There are a lot more than 2 patchsets that are busted at the moment,
unfortunately, but I assume you are referring to "snapshot too old"
and "Use Foreign Key relationships to infer multi-column join
selectivity".
Yeah, those are the ones I'm thinking of. I've not heard serious
proposals to revert any others, have you?
Here's a list of what I think is currently broken in 9.6 that we might
conceivably fix by reverting patches:
Yes, that would be a way forward for 9.6 if we are not able to close the
blocking open items before beta1. However, I think it would be bad if we
miss some of the important features listed below, like snapshot_too_old or
atomic pin/unpin, for 9.6. Can we consider postponing beta1 so that the
patch authors get time to resolve the blocking issues? There is a fair
argument that a delay is just a waste of time if the situation doesn't
improve much, but it seems we can rely on the people involved in those
patch sets to make progress.
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Amit Kapila <amit.kapila16@gmail.com> writes:
Yes, that would be a way forward for 9.6 if we are not able to close
blocking open items before beta1. However, I think it would be bad if we
miss some of the below listed important features like snapshot_too_old or
atomic pin/unpin for 9.6. Can we consider to postpone beta1, so that the
patch authors get time to resolve blocking issues?
This was already considered and rejected by the release team. Most of
the patches in question went in very close to the feature freeze deadline
(all but one, in fact, in the last week) and there is every reason to
suspect that they were rushed rather than really being ready to commit.
We should not allow an entire year's worth of work to slide in the
possibly-vain hope that these few patches can get fixed up if they're
given more time.
The bigger picture here is that we'd all like to get back to development
sooner rather than later. The longer it takes to stabilize 9.6, the
longer it will be before the tree reopens for new development. That
consideration should make us very willing to revert problematic changes
and let their authors try again in the next cycle.
regards, tom lane
On 03-05-2016 20:25, Craig Ringer wrote:
On 4 May 2016 at 01:12, Peter Geoghegan <pg@heroku.com> wrote:
On Tue, May 3, 2016 at 9:58 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:
As its committer, I tend to agree about reverting that feature. Craig
was just posting some more patches, and I have the pg_recvlogical
changes here (--endpos) which after some testing are not quite looking
ready to go -- plus we still have to write the actual Perl test scripts
that would use it. Taken together, this is now looking to me a bit
rushed, so I prefer to cut my losses here and revert the patch so that
we can revisit it for 9.7.
I think it's a positive development that we can take this attitude to
reverting patches. It should not be seen as a big personal failure,
because it isn't. Stigmatizing reverts incentivizes behavior that
leads to bad outcomes.
Indeed. Being scared to revert also makes the barrier to committing
something and getting it into more hands, for longer, under a variety of
different conditions higher. Not a ton of help with this particular
feature but quite important with others.
While I'm personally disappointed by this outcome, I also can't really
disagree with it. The whole area is a bit of a mess and hard to work on,
and as demonstrated my understanding of it when I wrote the original
patch was incomplete. We could commit the rewritten function, but ...
rewriting a feature just before beta1 probably says it's just not baked
yet, right?
You said that in another thread...
The upside of all this is that now I have a decent picture of how it
should all look and how timeline following, failover capability etc can
be introduced in a staged way. I'd also like to get rid of the use of
various globals to pass timeline information "around" the walsender and
share more of the logic between the code paths.
Question is: is the actual code so useless that it can't even be a 1.0
release? A lot of (complex) features were introduced with the notion
that they will be improved in the next version. However, if the code is
buggy or there are serious API problems, revert them.
--
Euler Taveira Timbira - http://www.timbira.com.br/
PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento
On 4 May 2016 at 13:03, Euler Taveira <euler@timbira.com.br> wrote:
Question is: is the actual code so useless that it can't even be a 1.0
release?
What's committed suffers from a design problem and cannot work correctly,
nor can it be fixed without an API change and significant revision. The
revised version I posted yesterday is that fix, but it's new code just
before beta1. It's pretty much equivalent to reverting the original patch
and then adding a new, corrected implementation. If considered as a new
feature it'd never be accepted at this stage of the release.
A lot of (complex) features were introduced with the notion
that they will be improved in the next version.
Absolutely, and this is what Petr and I (and Andres, though less actively
lately) have both been trying to do in terms of building on logical
decoding to add logical replication support. This is one small piece of
that work.
It's a pity since I'll have to maintain a patchset for 9.6 for people who
need physical HA support for their logical decoding and replication
clients. But it doesn't change the WAL format or catalogs, so it's pretty
painless to swap into existing installations. It could be worse.
However, if the code is buggy
or there are serious API problems, revert them.
Exactly. That's the case here.
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Craig Ringer wrote:
On 4 May 2016 at 13:03, Euler Taveira <euler@timbira.com.br> wrote:
Question is: is the actual code so useless that it can't even be a 1.0
release?
What's committed suffers from a design problem and cannot work correctly,
nor can it be fixed without an API change and significant revision. The
revised version I posted yesterday is that fix, but it's new code just
before beta1. It's pretty much equivalent to reverting the original patch
and then adding a new, corrected implementation. If considered as a new
feature it'd never be accepted at this stage of the release.
To make it worse, we don't have test code for a portion of the new
functionality: it turned out that the test module only tests half of it.
And in order to test the other half, we have a pending patch for some
pg_recvlogical changes, but we still don't have the actual test script.
So we would need to
1. commit the pg_recvlogical patch, assuming it's OK now.
2. write the test script to use that
3. commit the fix patch written a few days ago (which is still
unreviewed).
We could also commit the fix without the test, but that doesn't seem a
great idea.
Like Craig, I am not happy with this outcome. But realistically I think
it's the best decision at this point.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Andres Freund wrote:
On 2016-05-04 00:01:20 +0300, Ants Aasma wrote:
On Tue, May 3, 2016 at 9:57 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
If you tell me how to best test it, I do have a 4-socket server sitting idly
in the corner (well, a corner reachable by SSH). I can get us some numbers,
but I haven't been following the snapshot_too_old so I'll need some guidance
on what to test.
I worry about two contention points with the current implementation.
The main one is the locking within MaintainOldSnapshotTimeMapping()
that gets called every time a snapshot is taken. AFAICS this should
show up by setting old_snapshot_threshold to any positive value and
then running a simple read-only pgbench, at a scale factor that fits
within shared buffers, at high concurrency (the number of CPUs or a
small multiple). On a single-socket system this does not show up.
On a two-socket system it does; check the bottom of:
http://archives.postgresql.org/message-id/20160413171955.i53me46fqqhdlztq%40alap3.anarazel.de
Note that this (accidentally) compares thresholds 0 and 10 (instead of
-1 and 10),
In other words, we expect that when the feature is disabled, no
performance degradation occurs, because that function is not called at
all.
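A minimal sketch of the expectation being described here (illustrative names only, not the committed PostgreSQL code): with the GUC at its disabled setting of -1, the maintenance function returns before doing any locking or map updates, so disabled installations pay nothing:

```c
#include <stdint.h>

/* Illustrative sketch, not the actual PostgreSQL source: the GUC value -1
 * means "feature disabled", and the expensive map maintenance is expected
 * to be skipped entirely in that case. */
static int old_snapshot_threshold = -1;   /* minutes; -1 = disabled */

static int maintenance_calls = 0;         /* instrumentation for the sketch */

static void
maintain_old_snapshot_time_mapping(int64_t now, uint32_t xmin)
{
    if (old_snapshot_threshold < 0)
        return;                  /* disabled: no locking, no map updates */
    maintenance_calls++;         /* (real code would update the shared map) */
    (void) now;
    (void) xmin;
}
```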
Honestly, I don't see any strong ground on which to base a revert threat
for this feature. It doesn't scale as well as we would like in the case
where a large multi-socket server is fully stressed with a read-only load -- okay. But
that's unlikely to be a case where this feature is put to work. So I
think accepting the promise that this feature would be improved in a
future release to better support that case is good enough.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 2016-05-04 13:35:02 -0300, Alvaro Herrera wrote:
Honestly, I don't see any strong ground on which to base a revert threat
for this feature.
Its data structures are badly designed. But once it's released there's no
pressure to fix that. If this were an intrinsic cost - ok. But it's
not.
It doesn't scale as well as we would like in the case
where a large multi-socket server is fully stressed with a read-only load -- okay. But
that's unlikely to be a case where this feature is put to work.
It'll be just the same in a read-mostly workload, which is part of the
case for this feature.
So I think accepting the promise that this feature would be improved
in a future release to better support that case is good enough.
I've not heard any such promise.
Greetings,
Andres Freund
On Wed, May 4, 2016 at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:
On 2016-05-04 13:35:02 -0300, Alvaro Herrera wrote:
Honestly, I don't see any strong ground on which to base a revert threat
for this feature.
Its data structures are badly designed. But once it's released there's no
pressure to fix that. If this were an intrinsic cost - ok. But it's
not.
I don't want to rule out the possibility that you are right, because
you're frequently right about lots of things. However, you haven't
provided much detail. AFAIK, the closest you've come to articulating
a case for reverting this patch is to say that the tax ought to be
paid by the write-side, rather than the read-side. I don't know
exactly what that means, though. Snapshots are not stored in shared
memory; writers can't iterate through all snapshots and whack the ones
that are problematic - and even if they could, they can't know what
tuples will be read in the future, which determines whether or not
there is an actual problem. We could dispense with a lot of this
machinery if we simply killed off transactions or sessions that stuck
around too long, but the whole point of this feature is to avoid
having to do that when it isn't really necessary. Surely nobody here
is blind to the fact that holding back xmin often creates a lot of
bloat for no reason - many or all of the retained old row versions may
never be accessed. So I would like to hear more detail about why you
think that the data structures are badly designed, and how they could
be designed differently without changing what the patch does (which
amounts to wishing that Kevin had implemented a different feature than
the one you think he should have implemented).
It doesn't scale as well as we would like in the case
where a large multi-socket server is fully stressed with a read-only load -- okay. But
that's unlikely to be a case where this feature is put to work.
It'll be just the same in a read-mostly workload, which is part of the
case for this feature.
So what? SSI doesn't scale either, and nobody's saying we have to rip
it out. It's still useful to people. pgbench is not the world.
So I think accepting the promise that this feature would be improved
in a future release to better support that case is good enough.I've not heard any such promise.
The question Alvaro is raising isn't whether such a promise has been
made but whether it would suffice. In fairness, such promises are not
enforceable. Kevin can promise to work on this and then be run over
by a rogue Mr. Softee truck tomorrow, and the work won't get done; or
he can go to work for some non-PostgreSQL-supporting company and
vanish. I hope neither of those things happens, but if we accept a
promise to improve it, it's going to be contingent on certain things
happening or not happening which are beyond the ability of the
PostgreSQL community to effect or forestall. (FWIW, I believe I can
safely say that EnterpriseDB will support his continued work on this
feature for as long as the community has concerns about it and Kevin
works for EnterpriseDB.)
But personally, I generally agree with Alvaro's analysis. If this
patch is slow when turned on, and you don't like that, don't use it.
If you need to use it and want it to scale better, then you can write
a patch to make it do that. Nobody is more qualified than you. I
think it's a show-stopper if the patch regresses performance more than
trivially when turned off, but we need fresh test results before
reaching a conclusion about that. I also think it's a show-stopper if
you can hold out specific design issues which we can generally agree
are signs of serious flaws in Kevin's thinking. I do not think it's a
show-stopper if you wish he'd worked harder on the performance under
heavy concurrency with very short transactions. You could have raised
that issue at any time, but instead waited until after feature freeze.
If somebody had even hinted that such a problem might exist, Kevin
probably would have fixed it before commit, but nobody did. As soon
as it was raised, Kevin started working on it. That's about all you
can ask, I think.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, May 4, 2016 at 1:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:
If somebody had even hinted that such a problem might exist, Kevin
probably would have fixed it before commit, but nobody did. As soon
as it was raised, Kevin started working on it. That's about all you
can ask, I think.
Right; I have not been ignoring the issue -- but I prioritized it
below fixing correctness issues and performance issues when the
feature is off. Since there are no known issues in either of those
areas remaining once I push the patch I posted earlier today, I'm
taking a close look at the three proposals from three different
people about ways to address it (along with any other ideas that
come to mind while working through those). Fortunately, the access
problems to the EDB big NUMA machines have now been solved (by
tweaking firewall settings), so I should have some sort of
meaningful benchmarks of those three alternatives and anything else
that comes to mind within a few days. (Until now I have been asking
others to do some runs, which doesn't gather the data nearly as
quickly as actually having access.)
Amit's proof-of-concept patch is very small and safe and yielded a
3x to 4x performance improvement with the old_snapshot_threshold =
1 on a big NUMA machine with concurrency in the 32 to 64 client
range. I don't know whether we can do better before beta1, but it
is something. I'll be happy to try any other suggestions.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
On 2016-05-04 14:25:14 -0400, Robert Haas wrote:
On Wed, May 4, 2016 at 12:42 PM, Andres Freund <andres@anarazel.de> wrote:
On 2016-05-04 13:35:02 -0300, Alvaro Herrera wrote:
Honestly, I don't see any strong ground on which to base a revert threat
for this feature.
Its data structures are badly designed. But once it's released there's no
pressure to fix that. If this were an intrinsic cost - ok. But it's
not.
I don't want to rule out the possibility that you are right, because
you're frequently right about lots of things. However, you haven't
provided much detail. AFAIK, the closest you've come to articulating
a case for reverting this patch is to say that the tax ought to be
paid by the write-side, rather than the read-side.
I think I did go into some more detail than that, but let me explain
here:
My concern isn't performing checks at snapshot build time and at buffer
access time. That seems fairly reasonable. My concern is that the
time->xid mapping used to determine the xid horizon is built at snapshot
access time. The relevant function for that is
MaintainOldSnapshotTimeMapping() called by GetSnapshotData(), if you
want to look at it. Building the mapping when building the snapshot
requires that an exclusive lwlock is taken. Given that there are very few
access patterns in which more xids are assigned than snapshots taken,
that just doesn't make much sense; and that's leaving aside the fact
that adding an exclusive lock to a function that's already one of the
biggest bottlenecks in postgres isn't a good idea.
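To make the objection concrete, here is a toy model of that call pattern (invented names, not PostgreSQL source): every snapshot build runs the maintenance step under what stands in for the exclusive LWLock, so even purely read-only backends serialize on the shared map:

```c
#include <stdint.h>

/* Toy model of the pattern under discussion -- not PostgreSQL source.
 * In the committed code, GetSnapshotData() calls
 * MaintainOldSnapshotTimeMapping(), which takes an exclusive LWLock on
 * the shared time->xid map.  Modeled here, every snapshot build bumps an
 * "exclusive acquisitions" counter, showing that read-only backends
 * serialize on the map even though no xids are being assigned. */

static long    exclusive_acquisitions = 0; /* stands in for LWLockAcquire(.., LW_EXCLUSIVE) */
static int64_t map_head_minute = -1;       /* newest minute bucket in the map */

/* Called once per snapshot, i.e. roughly once per query. */
static void
toy_maintain_mapping(int64_t now_minute)
{
    exclusive_acquisitions++;          /* exclusive lock, even for readers */
    if (now_minute > map_head_minute)
        map_head_minute = now_minute;  /* extend the map to the new minute */
    /* ... release the lock ... */
}
```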
If we instead built the map somewhere around GetNewTransactionId() we'd
only pay the cost when advancing an xid. It'd also make it possible to
avoid another GetCurrentIntegerTimestamp() per snapshot, including the
acquisition of oldSnapshotControl->mutex_current. In addition the data
structure would be easier to maintain, because xids grow monotonically
(yes, yes, wraparound, but that's not a problem) - atm we need somewhat
more complex logic to determine into which bucket an xid/timestamp pair
has to be mapped.
So I'm really not just concerned with the performance, I think that's
just fallout from choosing the wrong data representation.
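A sketch of the alternative being argued for, under the assumption of a simple per-minute ring (invented names and layout, not a committed design): because xids grow monotonically, a map filled from the xid-assignment side is append-only, and read-only snapshots never have to write it:

```c
#include <stdint.h>

/* Illustrative only -- invented names, not a committed design.  A
 * per-minute ring filled when an xid is assigned (cf.
 * GetNewTransactionId()), instead of on every snapshot build. */
#define TOY_BUCKETS 8

typedef struct
{
    int64_t  minute;      /* 0 = unused slot */
    uint32_t latest_xid;  /* newest xid assigned during that minute */
} ToyBucket;

static ToyBucket toy_ring[TOY_BUCKETS];
static int64_t   toy_head_minute = 0;

/* Writer side: record the newest xid for the current minute.  Because
 * xids grow monotonically, this is a simple append/overwrite at the
 * head; no search is needed. */
static void
toy_note_xid(int64_t now_minute, uint32_t xid)
{
    if (now_minute > toy_head_minute)
        toy_head_minute = now_minute;   /* advance to a fresh bucket */
    ToyBucket *b = &toy_ring[toy_head_minute % TOY_BUCKETS];
    b->minute = toy_head_minute;
    b->latest_xid = xid;
}

/* Reader side: the newest xid assigned at or before
 * (now - threshold) minutes, i.e. the "too old" horizon. */
static uint32_t
toy_horizon_xid(int64_t now_minute, int threshold_minutes)
{
    int64_t  target = now_minute - threshold_minutes;
    uint32_t best = 0;

    for (int i = 0; i < TOY_BUCKETS; i++)
        if (toy_ring[i].minute > 0 && toy_ring[i].minute <= target &&
            toy_ring[i].latest_xid > best)
            best = toy_ring[i].latest_xid;
    return best;
}
```

With this shape the lookup side only reads the ring, so map maintenance costs nothing on workloads that assign no xids.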
If you look at the bottom of
/messages/by-id/20160413171955.i53me46fqqhdlztq@alap3.anarazel.de
which shows performance numbers for old_snapshot_threshold=0
vs. old_snapshot_threshold=10. While that wasn't intended at the time,
old_snapshot_threshold=0 shows the cost of the feature without
maintaining the mapping, and old_snapshot_threshold=10 shows it while
building the mapping. Pretty dramatic difference - that's really the
cost of map maintenance.
FWIW, it's not just me doubting the data structure here, Ants did as
well.
Surely nobody here is blind to the fact that holding back xmin often
creates a lot of bloat for no reason - many or all of the retained old
row versions may never be accessed.
Definitely. I *like* the feature.
So I would like to hear more detail about why you think that the data
structures are badly designed, and how they could be designed
differently without changing what the patch does (which amounts to
wishing that Kevin had implemented a different feature than the one
you think he should have implemented).
Well, there'd be some minor differences in what determines "too old" based
on the above, but I think it'd just be a bit easier to explain.
It doesn't scale as well as we would like in the case
where a large multi-socket server is fully stressed with a read-only load -- okay. But
that's unlikely to be a case where this feature is put to work.
It'll be just the same in a read-mostly workload, which is part of the
case for this feature.
So what? SSI doesn't scale either, and nobody's saying we have to rip
it out. It's still useful to people. pgbench is not the world.
Sure, pgbench is not the world. But this isn't a particularly pgbench
specific issue.
So I think accepting the promise that this feature would be improved
in a future release to better support that case is good enough.
I've not heard any such promise.
The question Alvaro is raising isn't whether such a promise has been
made but whether it would suffice. In fairness, such promises are not
enforceable. Kevin can promise to work on this and then be run over
by a rogue Mr. Softee truck tomorrow, and the work won't get done; or
he can go to work for some non-PostgreSQL-supporting company and
vanish.
Sure, it's not a guarantee. But I think a "promise" (signed in blood of
course) of a senior contributor makes quite the difference.
But personally, I generally agree with Alvaro's analysis. If this
patch is slow when turned on, and you don't like that, don't use it.
If you need to use it and want it to scale better, then you can write
a patch to make it do that. Nobody is more qualified than you.
I think that's what ticks me off about this. By introducing a feature
implemented with the wrong structure, the onus of work is placed on
others. It's imo perfectly reasonable not to unnecessarily perform
micro-optimization before benchmarks show a problem, and if it were just
a question of doing that in 9.7, I'd be fine. But what we're talking
about is rewriting the central part of the feature.
I think it's a show-stopper if the patch regresses performance more
than trivially when turned off, but we need fresh test results before
reaching a conclusion about that.
I'm not concerned about that at this stage. Kevin addressed those
problems as far as I'm aware.
I also think it's a show-stopper if you can hold out specific design
issues which we can generally agree are signs of serious flaws in
Kevin's thinking.
I think that's my issue.
I do not think it's a show-stopper if you wish he'd worked harder on
the performance under heavy concurrency with very short transactions.
You could have raised that issue at any time, but instead waited until
after feature freeze.
Sorry, I don't buy that argument. There were senior community people
giving feedback on the patch, the potential for performance regressions
wasn't called out, the patch was updated pretty late in the CF. I find
it really scary that review didn't call out the potential for
regressions here, at the very least the ones with the feature disabled
really should have been caught. Putting exclusive lwlocks and spinlocks
in critical paths isn't a subtle, easy-to-miss issue.
Andres
On Wed, May 4, 2016 at 3:42 PM, Andres Freund <andres@anarazel.de> wrote:
My concern isn't performing checks at snapshot build time and at buffer
access time. That seems fairly reasonable. My concern is that the
time->xid mapping used to determine the xid horizon is built at snapshot
access time. The relevant function for that is
MaintainOldSnapshotTimeMapping() called by GetSnapshotData(), if you
want to look at it. Building the mapping when building the snapshot
requires that an exclusive lwlock is taken. Given that there are very few
access patterns in which more xids are assigned than snapshots taken,
that just doesn't make much sense; and that's leaving aside the fact
that adding an exclusive lock to a function that's already one of the
biggest bottlenecks in postgres isn't a good idea.
It seems to me that you are confusing the structure itself with the
point from which the function that fills it is called (and how often).
Some of the proposals involve fairly small tweaks to call
MaintainOldSnapshotTimeMapping() from elsewhere or only when
something changes (like crossing a minute boundary or seeing that a
new TransactionId has been assigned). If you can disentangle those
ideas, it might not look so bad.
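One way to picture the "only when something changes" variant, with made-up names (this is not the actual proof-of-concept patch, just an illustration): each backend caches the last minute and xid it saw, and only falls through to the locked maintenance path when either has advanced:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustration of the "only when something changes" idea; names are
 * invented and this is not the actual proof-of-concept patch. */
static int64_t  last_seen_minute = -1;  /* per-backend cache */
static uint32_t last_seen_xid = 0;

/* Cheap, lock-free check performed before taking the exclusive lock:
 * skip maintenance unless a minute boundary was crossed or a new
 * TransactionId has been assigned since the last snapshot. */
static bool
mapping_needs_update(int64_t now_minute, uint32_t latest_xid)
{
    if (now_minute == last_seen_minute && latest_xid == last_seen_xid)
        return false;            /* nothing changed: stay lock-free */
    last_seen_minute = now_minute;
    last_seen_xid = latest_xid;
    return true;                 /* fall through to the locked path */
}
```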
If we instead built the map somewhere around GetNewTransactionId() we'd
only pay the cost when advancing an xid. It'd also make it possible to
avoid another GetCurrentIntegerTimestamp() per snapshot, including the
acquisition of oldSnapshotControl->mutex_current. In addition the data
structure would be easier to maintain, because xids grow monotonically
(yes, yes, wraparound, but that's not a problem) - atm we need somewhat
more complex logic to determine into which bucket an xid/timestamp pair
has to be mapped.
Right.
So I'm really not just concerned with the performance, I think that's
just fallout from choosing the wrong data representation.
Wrong. It's a matter of when the calls are made to maintain the
structure.
If you look at the bottom of
/messages/by-id/20160413171955.i53me46fqqhdlztq@alap3.anarazel.de
which shows performance numbers for old_snapshot_threshold=0
vs. old_snapshot_threshold=10. While that wasn't intended at the time,
old_snapshot_threshold=0 shows the cost of the feature without
maintaining the mapping, and old_snapshot_threshold=10 shows it while
building the mapping. Pretty dramatic difference - that's really the
cost of map maintenance.
Right.
FWIW, it's not just me doubting the data structure here, Ants did as
well.
It is true that the patch he posted added one more field to a struct.
Surely nobody here is blind to the fact that holding back xmin often
creates a lot of bloat for no reason - many or all of the retained old
row versions may never be accessed.
Definitely. I *like* the feature.
And I think we have agreed in off-list discussions that even with
these NUMA issues the patch, as it stands, would help some -- the
poor choice of a call site for MaintainOldSnapshotTimeMapping()
just unnecessarily limits how many workloads can benefit.
Hopefully you can understand the reasons I chose to focus on the
performance issues when it is disabled, and the hash index issue
before moving on to this.
So I think accepting the promise that this feature would be improved
in a future release to better support that case is good enough.
I've not heard any such promise.
The question Alvaro is raising isn't whether such a promise has been
made but whether it would suffice. In fairness, such promises are not
enforceable. Kevin can promise to work on this and then be run over
by a rogue Mr. Softee truck tomorrow, and the work won't get done; or
he can go to work for some non-PostgreSQL-supporting company and
vanish.
Sure, it's not a guarantee. But I think a "promise" (signed in blood of
course) of a senior contributor makes quite the difference.
How about we see where we are as we get closer to a go/no-go point,
see where performance has settled, and see what kind of promise
would satisfy you. I'm uncomfortable with the demand for a rather
non-specific promise, and find myself flashing back to some
sections of the novel Catch-22.
But personally, I generally agree with Alvaro's analysis. If this
patch is slow when turned on, and you don't like that, don't use it.
If you need to use it and want it to scale better, then you can write
a patch to make it do that. Nobody is more qualified than you.
I think that's what ticks me off about this. By introducing a feature
implemented with the wrong structure, the onus of work is placed on
others. It's imo perfectly reasonable not to unnecessarily perform
micro-optimization before benchmarks show a problem, and if it were just
a question of doing that in 9.7, I'd be fine. But what we're talking
about is rewriting the central part of the feature.
If you see a need for more than adding a very small number of
fields to any struct or function, please elaborate; if anyone else
has proposed a solution that requires that, I've missed it.
I also think it's a show-stopper if you can hold out specific design
issues which we can generally agree are signs of serious flaws in
Kevin's thinking.
I think that's my issue.
Amit's proof-of-concept patch reduced the problem by a factor of 3x
to 4x with a local "if", without any structure changes, and Ants'
proposal looks like it will reduce the locks to once per minute
(potentially making them pretty hard to measure) by adding one
field to one struct -- so I'm really having trouble understanding
what you're getting at. Clarification welcome.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, May 4, 2016 at 1:42 PM, Andres Freund <andres@anarazel.de> wrote:
Surely nobody here is blind to the fact that holding back xmin often
creates a lot of bloat for no reason - many or all of the retained old
row versions may never be accessed.
Definitely. I *like* the feature.
+1.
I do not think it's a show-stopper if you wish he'd worked harder on
the performance under heavy concurrency with very short transactions.
You could have raised that issue at any time, but instead waited until
after feature freeze.
Sorry, I don't buy that argument. There were senior community people
giving feedback on the patch, the potential for performance regressions
wasn't called out, the patch was updated pretty late in the CF. I find
it really scary that review didn't call out the potential for
regressions here, at the very least the ones with the feature disabled
really should have been caught. Putting exclusive lwlocks and spinlocks
in critical paths isn't a subtle, easy-to-miss issue.
As one of the people that looked at the patch before it went in,
perhaps I can shed some light. I focused exclusively on the
correctness of the core mechanism. It simply didn't occur to me to
check for problems of the type you describe, that affect the system
even when the feature is disabled. I didn't check because Kevin is a
committer, who I assumed wouldn't have made such an error. Also, time
was limited.
--
Peter Geoghegan
Hi,
On 2016-05-04 16:22:41 -0500, Kevin Grittner wrote:
On Wed, May 4, 2016 at 3:42 PM, Andres Freund <andres@anarazel.de> wrote:
My concern isn't performing checks at snapshot build time and at buffer
access time. That seems fairly reasonable. My concern is that the
time->xid mapping used to determine the xid horizon is built at snapshot
access time. The relevant function for that is
MaintainOldSnapshotTimeMapping() called by GetSnapshotData(), if you
want to look at it. Building the mapping when building the snapshot
requires that an exclusive lwlock is taken. Given that there's very few
access patterns in which more xids are assigned than snapshots taken,
that just doesn't make much sense; even leaving the fact that adding an
exclusive lock in a function that's already one of the biggest
bottlenecks in postgres, isn't a good idea.
It seems to me that you are confusing the structure itself with the
point from which the function that fills it is called (and how often).
Well, you say tomato, I say tomato. Sure, we'd still use some form of
mapping, but filling it from somewhere else, with somewhat different
contents (xact timestamp presumably), with a somewhat different
replacement policy wouldn't leave all that much of the current structure
in place. But ok, let's then call it a question of where from and how
the mapping is maintained.
Some of the proposals involve fairly small tweaks to call
MaintainOldSnapshotTimeMapping() from elsewhere or only when
something changes (like crossing a minute boundary or seeing that a
new TransactionId has been assigned). If you can disentangle those
ideas, it might not look so bad.
Yea, if we can do that, I'm ok. I'm doubtful about releasing with the
current state, even leaving performance aside, since fixing this will
result in somewhat changed semantics, and I'm doubtful about significant
development at this point of the release process. If it comes down to
either one of those I'm clearly in favor of the latter.
Surely nobody here is blind to the fact that holding back xmin often
creates a lot of bloat for no reason - many or all of the retained old
row versions may never be accessed.
Definitely. I *like* the feature.
And I think we have agreed in off-list discussions that even with
these NUMA issues the patch, as it stands, would help some -- the
poor choice of a call site for MaintainOldSnapshotTimeMapping()
just unnecessarily limits how many workloads can benefit.
Yes, totally.
Hopefully you can understand the reasons I chose to focus on the
performance issues when it is disabled, and the hash index issue
before moving on to this.
Well, somewhat. For me addressing architectural issues (and where to
fill the mapping from is that, independent of whether you call it a data
structure issue) is pretty important too, because I personally don't
believe we can release or even go to beta before that. But I'd not be
all that bothered to release beta1 with a hash index issue that we know
how to address.
Sure, it's not a guarantee. But I think a "promise" (signed in blood of
course) of a senior contributor makes quite the difference.
How about we see where we are as we get closer to a go/no-go point,
see where performance has settled, and see what kind of promise
would satisfy you.
Hm. We're pretty close to a go/no-go point, namely beta1. I don't want
to be in the situation that we don't fix the issue before release, just
because beta has passed.
I'm uncomfortable with the demand for a rather
non-specific promise, and find myself flashing back to some
sections of the Catch 22 novel.
Gotta read that one day (ordered).
Greetings,
Andres Freund
On Wed, May 4, 2016 at 6:06 PM, Andres Freund <andres@anarazel.de> wrote:
Some of the proposals involve fairly small tweaks to call
MaintainOldSnapshotTimeMapping() from elsewhere or only when
something changes (like crossing a minute boundary or seeing that a
new TransactionId has been assigned). If you can disentangle those
ideas, it might not look so bad.
Yea, if we can do that, I'm ok. I'm doubtful about releasing with the
current state, even leaving performance aside, since fixing this will
result in somewhat changed semantics, and I'm doubtful about significant
development at this point of the release process. If it comes down to
either one of those I'm clearly in favor of the latter.
How would the semantics change?
Hm. We're pretty close to a go/no-go point, namely beta1. I don't want
to be in the situation that we don't fix the issue before release, just
because beta has passed.
So, I was worried about this, too. But I think there is an
overwhelming consensus on pgsql-release that getting a beta out early
trumps all, and that if that results in somewhat more post-beta1
change than we've traditionally had, so be it. And I think that's
pretty fair. We need to be really careful, as we get closer to
release, about what impact the changes we make might have even on
people not using the features in question, because otherwise we might
invalidate the results of testing already performed. We also need to
be careful about whether late changes are going to be stable, because
instability at a late date will postpone the release. But I don't
believe that rules out all meaningful change. I think we can decide
to revert this feature, or rework it somewhat, after beta1, and that's
OK.
Similarly with the other commits that currently have multiple
outstanding bugs. If they continue to breed bugs, we can shoot them
later.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
On Wed, May 4, 2016 at 6:06 PM, Andres Freund <andres@anarazel.de> wrote:
Some of the proposals involve fairly small tweaks to call
MaintainOldSnapshotTimeMapping() from elsewhere or only when
something changes (like crossing a minute boundary or seeing that a
new TransactionId has been assigned). If you can disentangle those
ideas, it might not look so bad.
Yea, if we can do that, I'm ok. I'm doubtful about releasing with the
current state, even leaving performance aside, since fixing this will
result in somewhat changed semantics, and I'm doubtful about significant
development at this point of the release process. If it comes down to
either one of those I'm clearly in favor of the latter.
How would the semantics change?
Right now the time at which the snapshot is computed is what matters; if
maintenance of xids is moved, it'll likely be tied to the time xids are
assigned. That seems perfectly alright, but it'll change behaviour.
So, I was worried about this, too. But I think there is an
overwhelming consensus on pgsql-release that getting a beta out early
trumps all, and that if that results in somewhat more post-beta1
change than we've traditionally had, so be it.
*If* that's the policy - cool! I just don't want to see the issue
not being fixed due to only wanting conservative changes. And the
discussion around fixing spinlock related issues in the patch certainly
made me think the RMT aimed to be conservative.
Greetings,
Andres Freund
On Wed, May 4, 2016 at 6:28 PM, Andres Freund <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
On Wed, May 4, 2016 at 6:06 PM, Andres Freund <andres@anarazel.de> wrote:
Some of the proposals involve fairly small tweaks to call
MaintainOldSnapshotTimeMapping() from elsewhere or only when
something changes (like crossing a minute boundary or seeing that a
new TransactionId has been assigned). If you can disentangle those
ideas, it might not look so bad.
Yea, if we can do that, I'm ok. I'm doubtful about releasing with the
current state, even leaving performance aside, since fixing this will
result in somewhat changed semantics, and I'm doubtful about significant
development at this point of the release process. If it comes down to
either one of those I'm clearly in favor of the latter.
How would the semantics change?
Right now the time at which the snapshot is computed is what matters; if
maintenance of xids is moved, it'll likely be tied to the time xids are
assigned. That seems perfectly alright, but it'll change behaviour.
Not following.
So, I was worried about this, too. But I think there is an
overwhelming consensus on pgsql-release that getting a beta out early
trumps all, and that if that results in somewhat more post-beta1
change than we've traditionally had, so be it.*If* that's the policy - cool! I just don't want to see the issue
not being fixed due to only wanting conservative changes. And the
discussion around fixing spinlock related issues in the patch certainly
made me think the RMT aimed to be conservative.
Understand that I am conveying what I understand the sentiment of the
community to be, not guaranteeing any specific outcome.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 5 May 2016 1:28 a.m., "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
How would the semantics change?
Right now the time for computing the snapshot is relevant, if
maintenance of xids is moved, it'll likely be tied to the time xids are
assigned. That seems perfectly alright, but it'll change behaviour.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be made
in the next release.
Regards,
Ants Aasma
On 2016-05-05 06:08:39 +0300, Ants Aasma wrote:
On 5 May 2016 1:28 a.m., "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
How would the semantics change?
Right now the time for computing the snapshot is relevant, if
maintenance of xids is moved, it'll likely be tied to the time xids are
assigned. That seems perfectly alright, but it'll change behaviour.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be made
in the next release.
I'm not convinced of that - right now the timeout is computed as an
offset to the time a snapshot with a certain xmin horizon is
taken. Moving the computation to GetNewTransactionId() or a clock tick
process will make it relative to the time an xid has been generated
(minus a fuzz factor). That'll behave differently in a number of cases, no?
On Thu, May 5, 2016 at 8:44 AM, Andres Freund <andres@anarazel.de> wrote:
On 2016-05-05 06:08:39 +0300, Ants Aasma wrote:
On 5 May 2016 1:28 a.m., "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
How would the semantics change?
Right now the time for computing the snapshot is relevant, if
maintenance of xids is moved, it'll likely be tied to the time xids
are
assigned. That seems perfectly alright, but it'll change behaviour.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be
made
in the next release.
I'm not convinced of that - right now the timeout is computed as an
offset to the time a snapshot with a certain xmin horizon is
taken.
Here are you talking about snapshot time (snapshot->whenTaken) which is
updated at the time of GetSnapshotData()?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On 5 May 2016 at 6:14 AM, "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-05 06:08:39 +0300, Ants Aasma wrote:
On 5 May 2016 1:28 a.m., "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
How would the semantics change?
Right now the time for computing the snapshot is relevant, if
maintenance of xids is moved, it'll likely be tied to the time xids
are
assigned. That seems perfectly alright, but it'll change behaviour.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be
made
in the next release.
I'm not convinced of that - right now the timeout is computed as an
offset to the time a snapshot with a certain xmin horizon is
taken. Moving the computation to GetNewTransactionId() or a clock tick
process will make it relative to the time an xid has been generated
(minus a fuzz factor). That'll behave differently in a number of cases,
no?
The timeout is calculated in TransactionIdLimitedForOldSnapshots() as
GetSnapshotCurrentTimestamp() minus old_snapshot_threshold, rounded down
to the previous minute. If the highest xmin seen in that minute is
useful in raising the cleanup xmin, the threshold is bumped. Snapshots
switch behavior when their whenTaken is past this threshold. Xid
generation time has no effect on this timing; it's based entirely on the
passage of time.
The needed invariant is that no snapshot having whenTaken after the
timeout timestamp can have an xmin less than the calculated bound.
Moving the xmin calculation and storage to a clock tick actually makes
the bound tighter. The ordering between the xmin calculation and the
clock update/read needs to be correct to ensure the invariant.
Regards,
Ants Aasma
On Thu, May 5, 2016 at 3:39 AM, Ants Aasma <ants.aasma@eesti.ee> wrote:
On 5 May 2016 at 6:14 AM, "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-05 06:08:39 +0300, Ants Aasma wrote:
On 5 May 2016 1:28 a.m., "Andres Freund" <andres@anarazel.de> wrote:
On 2016-05-04 18:22:27 -0400, Robert Haas wrote:
How would the semantics change?
Right now the time for computing the snapshot is relevant, if
maintenance of xids is moved, it'll likely be tied to the time xids
are assigned. That seems perfectly alright, but it'll change behaviour.
Basically this feature allows pruning or vacuuming rows that would
still be visible to some snapshots, and then throws an error if one
of those snapshots is used for a scan that would generate incorrect
results because of the early cleanup. There are already several
places that we relax the timings in such a way that it allows
better performance by not being as aggressive as theoretically
possible in the cleanup. From my perspective, the performance
problems on NUMA when the feature is in use just show that this
approach wasn't taken far enough, and the solutions don't do
anything that isn't conceptually happening anyway. Some rows that
currently get cleaned up in vacuum N will get cleaned up in vacuum
N+1 with the proposed changes, but I don't see that as a semantic
change. In many of those cases we might be able to add some
locking and clean up the rows in vacuumm N-1, but nobody wants
that.
FWIW moving the maintenance to a clock tick process will not change user
visible semantics in any significant way. The change could easily be
made in the next release.
I'm not convinced of that - right now the timeout is computed as an
offset to the time a snapshot with a certain xmin horizon is
taken. Moving the computation to GetNewTransactionId() or a clock tick
process will make it relative to the time an xid has been generated
(minus a fuzz factor). That'll behave differently in a number of cases,
no?
Not in what I would consider any meaningful way. This feature is
not about trying to provoke the error, it is about preventing bloat
while minimizing errors. I have gotten many emails off-list from
people asking whether the feature was broken because they had a
case which was running with correct results but not generating any
errors, and it often came down to such things as cursor use which
had materialized a result set -- correct results were returned from
the materialized cursor results, so no error was needed. As long
as bloat is limited to something near the old_snapshot_threshold
and incorrect results are never returned the contract of this
feature is maintained. It's reaching a little bit as a metaphor,
but we don't say that the semantics of autovacuum are changed in
any significant way based on slight variations in the timing of
vacuums.
The timeout is calculated in TransactionIdLimitedForOldSnapshots() as
GetSnapshotCurrentTimestamp() minus old_snapshot_threshold, rounded down
to the previous minute. If the highest xmin seen in that minute is
useful in raising the cleanup xmin, the threshold is bumped. Snapshots
switch behavior when their whenTaken is past this threshold. Xid
generation time has no effect on this timing; it's based entirely on the
passage of time.
The needed invariant is that no snapshot having whenTaken after the
timeout timestamp can have an xmin less than the calculated bound.
Moving the xmin calculation and storage to a clock tick actually makes
the bound tighter. The ordering between the xmin calculation and the
clock update/read needs to be correct to ensure the invariant.
Right.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi,
On 05/04/2016 12:42 AM, Andres Freund wrote:
On 2016-05-03 20:57:13 +0200, Tomas Vondra wrote:
On 05/03/2016 07:41 PM, Andres Freund wrote:
...
I'm pretty sure that I said that somewhere else at least once: But to
be absolutely clear, I'm *not* really concerned with the performance
with the feature turned off. I'm concerned about the performance with
it turned on.
If you tell me how to best test it, I do have a 4-socket server sitting idly
in the corner (well, a corner reachable by SSH). I can get us some numbers,
but I haven't been following the snapshot_too_old so I'll need some guidance
on what to test.
I think it'd be cool if you could test the effect of the feature in
read-only (and additionally read-mostly?) workloads with various client
counts and old_snapshot_threshold values. For the latter maybe -1, 0, 10, 60
or such? I've done so (accidentally comparing 0 and 1 instead of -1 and
1) on a two socket machine in:
www.postgresql.org/message-id/20160413171955.i53me46fqqhdlztq@alap3.anarazel.de
It'd be very interesting to see how big the penalty is on a bigger box.
OK. I do have results from master with different values for the GUC (-1,
0, 10, 60), but I'm struggling with the reverts. Can you provide a patch
against current master (commit 4bbc1a7e) that does the revert?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
Over the past few days I've been running benchmarks on a fairly large
NUMA box (4 sockets, 32 cores / 64 with HT, 256GB of RAM) to see the
impact of the 'snapshot too old' - both when disabled and enabled with
various values in the old_snapshot_threshold GUC.
The full results are available here:
https://drive.google.com/folderview?id=0BzigUP2Fn0XbR1dyOFA3dUxRaUU
but let me explain how the benchmark was done and present a brief
summary of the results as I understand them.
The benchmark is a simple read-only pgbench with prepared statements,
i.e. doing something like this:
pgbench -S -M prepared -j N -c N
This was done on three datasets with different scales - small (100),
medium (3000) and large (10000). All of this fits into RAM, the two
smaller data sets fit into shared_buffers.
For each scale, the benchmark was run for 1, 16, 32, 64 and 128 clients,
5 runs for each client count, 5 minutes each. But only after a 2-hour
warmup with 128 clients.
So essentially something like this:
for s in 100 3000 10000; do
... init + warmup
for r in 1 2 3 4 5; do
for c in 1 16 32 64 128; do
pgbench -S -M prepared ...
done
done
done
The full results are available in the .tgz archive (as produced by
run.sh included in it). The old_snap.csv contains just the extracted
results, and there's also an ODS with some charts.
The initial set of tests was run on 4bbc1a7e, which was the last commit in
master when the benchmark was started. Then I've tested with 689f9a05
which is the last commit before the 'snapshot too old' got introduced,
and 91fd1df4 which is the current master, including a commit that should
improve performance in some cases.
So in total there is data for these combinations:
master-0 - 4bbc1a7e + old_snapshot_threshold=0
master-10 - 4bbc1a7e + old_snapshot_threshold=10
master-10-new - 91fd1df4 + old_snapshot_threshold=10
master-10-new-2 - 91fd1df4 + old_snapshot_threshold=10 (rerun)
master-60 - 4bbc1a7e + old_snapshot_threshold=60
master-689f9a05 - before the feature was added
master-disabled - 4bbc1a7e + old_snapshot_threshold=-1
master-revert - 4bbc1a7e + revert.patch (reverts the feature)
The results (average of the 5 runs) look like this (also see the charts
attached to this post):
scale=100
---------
branch           1      16      32      64     128  (clients)
master-0 18118 227615 370336 176435 229796
master-10 17741 186197 161655 66681 54977
master-10-new 18572 229685 351378 141624 183029
master-10-new-2 18366 228256 353420 148061 197953
master-60 17691 177053 137311 59508 53325
master-689f9a05 18630 237123 389782 519069 444732
master-disabled 17727 230818 386112 524981 440043
master-revert 18584 237383 390779 531075 457005
scale=3000
----------
branch           1      16      32      64     128  (clients)
master-0 16480 214081 347689 183451 285729
master-10 16390 181683 143847 65614 54499
master-10-new 16821 216562 339106 144488 238851
master-10-new-2 17121 215423 339424 149910 206001
master-60 16372 177228 133013 59861 53599
master-689f9a05 16945 223894 363393 488892 402938
master-disabled 16635 220064 359686 495642 399728
master-revert 16963 221884 366086 496107 430518
scale=10000
-----------
branch           1      16      32      64     128  (clients)
master-0 13954 171378 249923 260291 309065
master-10 13513 162214 146458 64457 54159
master-10-new 13776 178255 267475 149335 146706
master-10-new-2 13188 176799 269629 145049 233515
master-60 14102 160247 154704 61015 52711
master-689f9a05 13896 179688 276487 390220 340206
master-disabled 13939 184222 293032 406846 390739
master-revert 12971 174562 254986 394569 356436
A few initial observations:
* The results are a bit noisy, but I think in general this shows that
for certain cases there's a clearly measurable difference (up to 5%)
between the "disabled" and "reverted" cases. This is particularly
visible on the smallest data set.
* What's fairly strange is that on the largest dataset (scale 10000),
the "disabled" case is actually consistently faster than "reverted" -
that seems a bit suspicious, I think. It's possible that I did the
revert wrong, though - the revert.patch is included in the tgz. This is
why I also tested 689f9a05, but that's also slower than "disabled".
* The performance impact with the feature enabled seems rather
significant, especially once you exceed the number of physical cores (32
in this case). Then the drop is pretty clear - often ~50% or more.
* 7e3da1c4 claims to bring the performance within 5% of the disabled
case, but that seems not to be the case. What it does do, however, is
bring the 'non-immediate' cases close to the immediate ones (before,
the performance drop came much sooner in these cases - at 16 clients).
* It also seems to me that the feature greatly amplifies the variability
of the results, somehow. It's not uncommon to see results like this:
master-10-new-2 235516 331976 133316 155563 133396
where after the first runs (already fairly variable) the performance
tanks to ~50%. This happens particularly with higher client counts,
otherwise the max-min is within ~5% of the max. There are a few cases
where this happens without the feature (i.e. old master, reverted or
disabled), but it's usually much smaller than with it enabled
(immediate, 10 or 60). See the 'summary' sheet in the ODS spreadsheet.
I don't know what the problem is here - at first I thought that maybe
something else was running on the machine, or that anti-wraparound
autovacuum kicked in, but that seems not to be the case. There's nothing
like that in the postgres log (also included in the .tgz).
And the shell script interleaves runs for different client counts, so
another process running on the system would affect the surrounding runs
too (but that's not the case).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
old_snap.ods (application/vnd.oasis.opendocument.spreadsheet)