Testing with concurrent sessions
There has been periodic discussion here about allowing psql to deal
with multiple sessions, or possibly creating another tool to allow
this sort of test. Is anyone working on this?
It's very soon going to be critical that I be able to test particular
interleavings of statements in particular concurrent transaction sets
to be able to make meaningful progress on the serializable
transaction work. It would be wonderful if some of these scripts
could be integrated into the PostgreSQL 'make check' scripts,
although that's not an absolute requirement. I'm not really
concerned about performance tests for a while, just testing the
behavior of particular interleavings of statements in multiple
sessions. If psql isn't expected to support that soon, any
suggestions? Is pgTAP suited to this?
-Kevin
On Jan 1, 2010, at 6:01 PM, Kevin Grittner wrote:
It's very soon going to be critical that I be able to test particular
interleavings of statements in particular concurrent transaction sets
to be able to make meaningful progress on the serializable
transaction work. It would be wonderful if some of these scripts
could be integrated into the PostgreSQL 'make check' scripts,
although that's not an absolute requirement. I'm not really
concerned about performance tests for a while, just testing the
behavior of particular interleavings of statements in multiple
sessions. If psql isn't expected to support that soon, any
suggestions? Is pgTAP suited to this?
We've discussed it a bit in the past with regard to testing replication and such. I think the consensus was, failing support for concurrent sessions in psql, to use a Perl script to control multiple psql sessions and perhaps use Test::More to do the testing. Although pgTAP might make sense, too, if the tests ought to run in the database.
Best,
David
"David E. Wheeler" <david@kineticode.com> wrote:
I think the consensus was, failing support for concurrent sessions
in psql, to use a Perl script to control multiple psql sessions
and perhaps use Test::More to do the testing.
Are there any examples of that? While I can hack my way through
regular expressions when I need them, perl as a language is
something I don't know at all; with an example I might be able to
come up to speed quickly, though.
Although pgTAP might make sense, too, if the
tests ought to run in the database.
I need to run statements against a database; I don't particularly
need any special features of psql for this. Can anyone confirm that
pgTAP can let you interleave specific statements against specific
connections in a specific sequence? (The answer to that didn't leap
out at me in a quick scan of the docs.)
-Kevin
On mån, 2010-01-04 at 17:10 -0600, Kevin Grittner wrote:
"David E. Wheeler" <david@kineticode.com> wrote:
I think the consensus was, failing support for concurrent sessions
in psql, to use a Perl script to control multiple psql sessions
and perhaps use Test::More to do the testing.
Are there any examples of that? While I can hack my way through
regular expressions when I need them, perl as a language is
something I don't know at all; with an example I might be able to
come up to speed quickly, though.
If you're not deep into Perl, perhaps ignore the Test::More comment for
the moment and just use DBI to connect to several database sessions,
execute your queries and check if the results are what you want. Once
you have gotten somewhere with that, wrapping a test harness around it
is something others will be able to help with.
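Peter's suggestion (open several sessions, drive a fixed interleaving of statements, check the results) has roughly this shape. The sketch below uses Python's stdlib sqlite3 purely so it is self-contained; the real harness would use Perl/DBI against PostgreSQL, and the table and statements here are made up for illustration.

```python
import os
import sqlite3
import tempfile

# Two independent sessions against the same database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN below
b = sqlite3.connect(path, isolation_level=None)

a.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, n INTEGER)")
a.execute("INSERT INTO t VALUES (1, 0)")

# The scripted interleaving: A updates inside a transaction while B reads.
a.execute("BEGIN")
a.execute("UPDATE t SET n = n + 1 WHERE id = 1")

cur = b.execute("SELECT n FROM t WHERE id = 1")
before = cur.fetchone()[0]   # B still sees the old value: A has not committed
cur.close()

a.execute("COMMIT")

cur = b.execute("SELECT n FROM t WHERE id = 1")
after = cur.fetchone()[0]    # now B sees A's committed update
cur.close()

print(before, after)  # 0 1
```

The test driver is just straight-line code; the "concurrency" is entirely in which connection each statement is sent on, which is exactly the control needed here.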
Although pgTAP might make sense, too, if the
tests ought to run in the database.
I need to run statements against a database; I don't particularly
need any special features of psql for this. Can anyone confirm that
pgTAP can let you interleave specific statements against specific
connections in a specific sequence? (The answer to that didn't leap
out at me in a quick scan of the docs.)
pgTAP isn't really going to help you here, as it runs with *one*
database session, and its main functionality is to format the result of
SQL functions into TAP output, which is not very much like what you
ought to be doing.
On Jan 4, 2010, at 3:29 PM, Peter Eisentraut wrote:
If you're not deep into Perl, perhaps ignore the Test::More comment for
the moment and just use DBI to connect to several database sessions,
execute your queries and check if the results are what you want. Once
you have gotten somewhere with that, wrapping a test harness around it
is something others will be able to help with.
Last I heard, Andrew was willing to require Test::More for testing, so that a Perl script could handle multiple psql connections (perhaps forked) and output test results based on them. But he wasn't as interested in requiring DBI and DBD::Pg, neither of which are in the Perl core and are more of a PITA to install (not huge, but the barrier might as well stay low).
pgTAP isn't really going to help you here, as it runs with *one*
database session, and its main functionality is to format the result of
SQL functions into TAP output, which is not very much like what you
ought to be doing.
Right, exactly.
Best,
David
Hi,
Kevin Grittner wrote:
It's very soon going to be critical that I be able to test particular
interleavings of statements in particular concurrent transaction sets
to be able to make meaningful progress on the serializable
transaction work.
I've something in place for Postgres-R, as I also need to test
concurrent transactions there. It's based on python/twisted and is able
to start multiple Postgres instances (as required for testing
replication) and query them concurrently (as you seem to need as well).
It uses an asynchronous event loop (from twisted) and basically controls
processes, issues queries and checks results and ordering constraints
(e.g. transaction X must commit and return a result before transaction Y).
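For a flavor of the ordering-constraint checks Markus describes, here is a minimal skeleton using stdlib asyncio (an illustrative assumption; his actual framework is Twisted-based and drives real Postgres instances): record when each concurrent action completes, then assert on the order.

```python
import asyncio

events = []  # completion order observed by the harness

async def session(label, delay):
    # the delay stands in for how long the statement takes on its connection
    await asyncio.sleep(delay)
    events.append(label)

async def main():
    # Constraint under test: X's commit must finish before Y's result arrives.
    await asyncio.gather(session("X-commit", 0.01), session("Y-result", 0.05))

asyncio.run(main())
assert events.index("X-commit") < events.index("Y-result")
print(events)  # ['X-commit', 'Y-result']
```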
I'm still of the opinion that this testing framework needs
cleanup. However, others have already shown interest as well...
Regards
Markus Wanner
Markus Wanner <markus@bluegap.ch> wrote:
Kevin Grittner wrote:
It's very soon going to be critical that I be able to test
particular interleavings of statements in particular concurrent
transaction sets to be able to make meaningful progress on the
serializable transaction work.
I've something in place for Postgres-R, as I also need to test
concurrent transactions there. It's based on python/twisted and is
able to start multiple Postgres instances (as required for testing
replication) and query them concurrently (as you seem to need as
well). It uses an asynchronous event loop (from twisted) and
basically controls processes, issues queries and checks results
and ordering constraints (e.g. transaction X must commit and
return a result before transaction Y).
Where would I find this (and any related documentation)?
-Kevin
Hi,
Kevin Grittner wrote:
Where would I find this (and any related documentation)?
Sorry if that wasn't clear. I'm trying to put together something I
can release real soon now (tm). I'll keep you informed.
Regards
Markus Wanner
"David E. Wheeler" <david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
-Kevin
On ons, 2010-01-06 at 15:52 -0600, Kevin Grittner wrote:
"David E. Wheeler" <david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Then I don't see much of a point in using Perl. You might as well fire
up a few psqls from a shell script.
On Jan 6, 2010, at 1:52 PM, Kevin Grittner wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Probably the simplest way is to use the core IPC::Open3 module:
http://search.cpan.org/perldoc?IPC::Open3
IPC::Run might be easier to use if it's available, but it's not in Perl core, so YMMV. Really it's up to Andrew what modules he requires test servers to have.
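The IPC::Open3 pattern (spawn psql, write commands to its stdin, read replies from its stdout) translates directly to any language with pipes. A minimal sketch of the plumbing using Python's subprocess module, with a trivial echo child standing in for psql (an assumption; a real test would spawn something like `psql -X -A -q` instead):

```python
import subprocess
import sys

# Echo child standing in for psql: reads a line, replies with a prefixed line.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    sys.stdout.write('got: ' + line)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

child.stdin.write("SELECT 1;\n")         # one command in...
child.stdin.flush()
reply = child.stdout.readline().strip()  # ...one reply out

child.stdin.close()
child.wait()
print(reply)  # got: SELECT 1;
```

This works cleanly only when each command produces a known number of output lines; the general case needs more care, as discussed further down the thread.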
Best,
David
On Jan 6, 2010, at 2:08 PM, Peter Eisentraut wrote:
Then I don't see much of a point in using Perl. You might as well fire
up a few psqls from a shell script
If you're more comfortable with shell, then yes. Although then it won't run on Windows, will it?
Best,
David
On 2010-01-07 00:08 +0200, Peter Eisentraut wrote:
On ons, 2010-01-06 at 15:52 -0600, Kevin Grittner wrote:
"David E. Wheeler"<david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Then I don't see much of a point in using Perl. You might as well fire
up a few psqls from a shell script.
I don't see how that would work, but I might have misunderstood what
we're reaching for here. What I think would be most useful would be to
interleave statements between transactions, not just randomly fire psql
sessions and hope for race conditions.
Regards,
Marko Tiikkaja
Marko Tiikkaja <marko.tiikkaja@cs.helsinki.fi> wrote:
On 2010-01-07 00:08 +0200, Peter Eisentraut wrote:
Then I don't see much of a point in using Perl. You might as
well fire up a few psqls from a shell script.
I don't see how that would work, but I might have misunderstood
what we're reaching for here. What I think would be most useful
would be to interleave statements between transactions, not just
randomly fire psql sessions and hope for race conditions.
Yeah, I want to test specific interleavings of statements on
concurrent connections. There may *also* be some tests which throw
a lot at the server concurrently in a more random fashion, but it is
important to be able to have some very controlled tests where we
don't count on randomly creating the desired conflicts.
It would be valuable to be able to include some of these tests with
controlled and predictable statement interleavings in the "make
check" tests.
-Kevin
Marko Tiikkaja wrote:
On 2010-01-07 00:08 +0200, Peter Eisentraut wrote:
Then I don't see much of a point in using Perl. You might as well fire
up a few psqls from a shell script.
I don't see how that would work, but I might have misunderstood what
we're reaching for here. What I think would be most useful would be
to interleave statements between transactions, not just randomly
fire psql sessions and hope for race conditions.
Open a few psql with -f pointing to a pipe, and from the shell write
into the pipe? I don't think it's straightforward, but it should be
possible.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Alvaro Herrera <alvherre@commandprompt.com> wrote:
Open a few psql with -f pointing to a pipe, and from the shell
write into the pipe? I don't think it's straightforward, but it
should be possible.
I'll play with it and see what I can do.
Thanks,
-Kevin
On Wed, Jan 6, 2010 at 4:52 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
"David E. Wheeler" <david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
...Robert
Robert Haas wrote:
On Wed, Jan 6, 2010 at 4:52 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:"David E. Wheeler" <david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
In the buildfarm? Yes, I think so. The philosophy of the buildfarm is
that it should do what you would do yourself by hand.
And adding DBI as a requirement for running a buildfarm member would be
a significant extra barrier to entry, ISTM. (I am very fond of DBI, and
use it frequently, BTW)
I'm persuadable on most things, but this one would take a bit of doing.
A parallel psql seems to me a better way to go. We talked about that a
while ago, but I don't recall what happened to it.
cheers
andrew
On Wed, Jan 6, 2010 at 8:40 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
In the buildfarm? Yes, I think so. The philosophy of the buildfarm is that
it should do what you would do yourself by hand.
It just seems crazy to me to try to test anything without proper
language bindings. Opening a psql session and parsing the results
seems extraordinarily painful. I wonder if it would make sense to write
a small wrapper program that uses libpq and dumps out the results in a
format that is easy for Perl to parse.
Another idea would be to make a set of Perl libpq bindings that is
simpler than DBD::Pg and doesn't go through DBI. If we put those in the
main source tree (perhaps as a contrib module) they would be available
wherever we need them.
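The controller side of such a wrapper could stay very small if the wrapper printed rows tab-separated and ended each result set with a sentinel line. The format and sentinel below are made-up assumptions, just to show how little parsing the controlling script (sketched here in Python rather than Perl) would need:

```python
def read_result(lines):
    """Collect rows until the sentinel line; each row is a list of fields."""
    rows = []
    for line in lines:
        if line.rstrip("\n") == "-- done --":
            break
        rows.append(line.rstrip("\n").split("\t"))
    return rows

# What the hypothetical libpq wrapper might emit for one result set:
wire = iter(["1\talice\n", "2\tbob\n", "-- done --\n"])
print(read_result(wire))  # [['1', 'alice'], ['2', 'bob']]
```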
A parallel psql seems to me a better way to go. We talked about that a while
ago, but I don't recall what happened to it.
That seems like a dead-end to me. It's hard for me to imagine it's
ever going to be more than a toy.
...Robert
Robert Haas <robertmhaas@gmail.com> writes:
It just seems crazy to me to try to test anything without proper
language bindings. Opening a psql session and parsing the results
seems extraordinarily painful. I wonder if it would make sense to write
a small wrapper program that uses libpq and dumps out the results in a
format that is easy for Perl to parse.
Another idea would be to make a set of Perl libpq bindings that is
simpler than DBD::Pg and doesn't go through DBI. If we put those in the
main source tree (perhaps as a contrib module) they would be available
wherever we need them.
We have not yet fully accepted the notion that you must have Perl to
build (and, in fact, I am right now trying to verify that you don't).
I don't think that requiring Perl to test is going to fly.
A parallel psql seems to me a better way to go. We talked about that a while
ago, but I don't recall what happened to it.
That seems like a dead-end to me. It's hard for me to imagine it's
ever going to be more than a toy.
Well, the argument there is that it might be useful for actual use,
not only testing.
regards, tom lane
Andrew Dunstan wrote:
Robert Haas wrote:
Doing this without DBI is going to be ten times harder than doing
it with DBI. Are we really sure that's not a viable option?
In the buildfarm? Yes, I think so. The philosophy of the buildfarm
is that it should do what you would do yourself by hand.
And adding DBI as a requirement for running a buildfarm member
would be a significant extra barrier to entry, ISTM. (I am very
fond of DBI, and use it frequently, BTW)
I'm persuadable on most things, but this one would take a bit of
doing.
As far as I've been able to determine so far, to call psql in a
relatively portable way would require something like this:
http://perldoc.perl.org/perlfork.html
Is that really better than DBI?
Don't we need some way to routinely test multi-session issues?
Other ideas?
-Kevin
Tom Lane wrote:
We have not yet fully accepted the notion that you must have Perl
to build (and, in fact, I am right now trying to verify that you
don't). I don't think that requiring Perl to test is going to fly.
Well, if that's the consensus, I have to choose between trying to
implement multi-session psql and using testing which can't carry over
to long-term regular use. Are we anywhere close to an agreement on
what the multi-session psql implementation would look like? (If not
I can put something forward.)
-Kevin
On Wed, Jan 6, 2010 at 9:26 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
It just seems crazy to me to try to test anything without proper
language bindings. Opening a psql session and parsing the results
seems extraordinarily painful. I wonder if it would make sense to write
a small wrapper program that uses libpq and dumps out the results in a
format that is easy for Perl to parse.
Another idea would be to make a set of Perl libpq bindings that is
simpler than DBD::Pg and doesn't go through DBI. If we put those in the
main source tree (perhaps as a contrib module) they would be available
wherever we need them.
We have not yet fully accepted the notion that you must have Perl to
build (and, in fact, I am right now trying to verify that you don't).
I don't think that requiring Perl to test is going to fly.
I suppose that depends on the context. I'm not exactly sure what
Kevin's goal is here. For basic regression tests, yeah, we'd probably
like to keep that Perl-free. For more complex testing, I think using
Perl makes sense. Or to put the shoe on the other foot, if we DON'T
allow the use of Perl for more complex testing, then we're probably
not going to have any more complex tests. If we use a hypothetical
concurrent psql implementation to run the tests, how will we analyze
the results? It's no secret that the current regression tests are
fairly limited, in part because the only thing we can do with them is
diff the output against one or two "known good" results.
...Robert
Robert Haas wrote:
I'm not exactly sure what Kevin's goal is here.
I think it would be insane to try to do something like serializable
isolation mode without having regression tests. I would want more
tests than could reasonably go into the 'make check' suite to support
development, but it would be very good to have some go in there.
For basic regression tests, yeah, we'd probably like to keep that
Perl-free.
OK. Is parallel psql the only reasonable option?
For more complex testing, I think using Perl makes sense. Or to put
the shoe on the other foot, if we DON'T allow the use of Perl for
more complex testing, then we're probably not going to have any
more complex tests.
Do you envision some test suite committed to CVS beyond the 'make
check' tests, for "on demand" testing at a more rigorous level?
Am I missing something that's already there?
-Kevin
Kevin Grittner wrote:
Well, if that's the consensus, I have to choose between trying to
implement multi-session psql and using testing which can't carry over
to long-term regular use. Are we anywhere close to an agreement on
what the multi-session psql implementation would look like? (If not
I can put something forward.)
See
http://archives.postgresql.org/message-id/8204.1207689056@sss.pgh.pa.us
and followups.
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
On Wed, Jan 6, 2010 at 10:00 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
For basic regression tests, yeah, we'd probably like to keep that
Perl-free.
OK. Is parallel psql the only reasonable option?
It seems so, assuming you're willing to concede that it is a
reasonable option in the first place.
For more complex testing, I think using Perl makes sense. Or to put
the shoe on the other foot, if we DON'T allow the use of Perl for
more complex testing, then we're probably not going to have any
more complex tests.
Do you envision some test suite committed to CVS beyond the 'make
check' tests, for "on demand" testing at a more rigorous level?
Am I missing something that's already there?
Personally, I tend to think that to test this well you are going to
need a test suite written in a scripting language. Whether or not
that gets committed to CVS is a political question, but I would be in
favor of it (assuming it's good, of course). Maybe you will find that
you can do it all with concurrent psql, but (1) I'm not convinced and
(2) if that's your plan, does that mean you're going to do nothing
until someone implements concurrent psql?
...Robert
Robert Haas wrote:
It just seems crazy to me to try to test anything without proper
language bindings. Opening a psql session and parsing the results
seems extraordinarily painful.
I've written a Python-based program that spawns a captive psql and talks
to it (twice now, for different people); it ultimately uses the same sort
of open3() spawning David mentioned is available via IPC::Open3. You can
throw together a prototype that works well enough for some purposes in a
couple of hours. I don't know that it would ever be really robust,
though.
The main problem with that whole approach is that you have to be
extremely careful in how you deal with the situation where the captive
program is spewing an unknown amount of information back at you. How do
you know when it's done? Easy for the child and its master to deadlock
if you're not careful. In the past I worked around that issue by just
waiting for the process to end and then returning everything it had
written until that time. I don't know that this would be flexible
enough for what's needed for concurrent testing, where people are
probably going to want more of a "send a command, get some lines back
again" approach that keeps the session open.
If I thought a captive psql would work well in this context I'd have
written a prototype already. I'm not sure if it's actually possible to
do this well enough to meet expectations. Parsing psql output is
completely viable for trivial purposes though, and if the requirements
were constrained enough it might work well enough for simple concurrent
testing. While both concurrent psql and the libpq shim you suggested
would take more work, I feel a bit more confident those would result in
something that really worked as expected on every platform when finished.
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.com
On ons, 2010-01-06 at 14:10 -0800, David E. Wheeler wrote:
On Jan 6, 2010, at 2:08 PM, Peter Eisentraut wrote:
Then I don't see much of a point in using Perl. You might as well fire
up a few psqls from a shell script.
If you're more comfortable with shell, then yes. Although then it won't run on Windows, will it?
Well, I'm not saying that this is necessarily the better alternative.
But you might have to compromise somewhere. Otherwise this project will
take two years to complete and will be an unmaintainable mess (see
pg_regress).
On ons, 2010-01-06 at 20:49 -0500, Robert Haas wrote:
Another idea would be to make a set of Perl libpq bindings that is
simpler than DBD::Pg and doesn't go through DBI. If we put those in the
main source tree (perhaps as a contrib module) they would be available
wherever we need them.
http://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/perl5/?hideattic=0
On 7/01/2010 9:15 AM, Robert Haas wrote:
On Wed, Jan 6, 2010 at 4:52 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:"David E. Wheeler"<david@kineticode.com> wrote:
Last I heard, Andrew was willing to require Test::More for
testing, so that a Perl script could handle multiple psql
connections (perhaps forked) and output test results based on
them. But he wasn't as interested in requiring DBI and DBD::Pg,
neither of which are in the Perl core and are more of a PITA to
install (not huge, but the barrier might as well stay low).
OK, I've gotten familiar with Perl as a programming language and
tinkered with Test::More. What's not clear to me yet is what would
be considered good technique for launching several psql sessions
from that environment, interleaving commands to each of them, and
checking results from each of them as the test plan progresses. Any
code snippets or URLs to help me understand that are welcome. (It
seems clear enough with DBI, but I'm trying to avoid that per the
above.)
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
At this point, I'm personally wondering if it's worth putting together a
simple (ish) C program that reads a file describing command
interleavings on n connections. It fires up one thread per connection
required, then begins queuing commands up for the threads to execute in
per-thread fifo queues. The input file may contain synchronization
points where two or more explicitly specified threads (or just all
threads) must finish all their queued work before they may be given more.
Yes, it requires wrangling low-level threading (pthreads, or, for simple
purposes, the practically equivalent but differently spelled Win32
threading), so it's not going to be beautiful. But it'd permit a
declarative form for tests and a single, probably fairly maintainable,
test runner.
I reluctantly suspect that XML would be a good way to describe the tests
- first a block declaring your connections and their conn strings, then
a sequence of statements (each of which is associated with a named
connection) and synchronization points. Though, come to think of it, a
custom plaintext format would be pretty trivial too.
CONN conn1: dbname=regress, user=regress
CONN conn2: dbname=regress, user=regress
STMT conn1: SELECT blah blah;
STMT conn2: UPDATE blah blah;
SYNC conn1, conn2
etc. Or alternately one-file-per-connection (which would be handy if one
connection has *lots* of commands and others only occasional ones) - the
only trouble there being how to conveniently specify synchronization points.
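The custom plaintext format really is trivial to parse; a sketch of a reader that turns the CONN/STMT/SYNC lines above into a dispatchable action list (the field conventions are assumptions):

```python
def parse_spec(text):
    """Turn CONN/STMT/SYNC lines into a list of actions a runner could
    dispatch to per-connection worker queues."""
    actions = []
    for raw in text.strip().splitlines():
        kind, _, rest = raw.partition(" ")
        if kind == "CONN":
            name, _, conninfo = rest.partition(":")
            actions.append(("conn", name.strip(), conninfo.strip()))
        elif kind == "STMT":
            name, _, sql = rest.partition(":")
            actions.append(("stmt", name.strip(), sql.strip()))
        elif kind == "SYNC":
            # barrier: the named connections must drain their queues first
            actions.append(("sync", tuple(c.strip() for c in rest.split(","))))
    return actions

spec = """
CONN conn1: dbname=regress, user=regress
STMT conn1: SELECT blah blah;
SYNC conn1, conn2
"""
print(parse_spec(spec))
```

The runner itself (threads plus per-connection queues, as described above) is the harder part; the declarative file format is the easy bit.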
Anyway: If Java were acceptable I'd put one together now - but somehow I
don't think requiring Java would be popular if Perl is an issue ;-) My
C/pthreads is more than a little bit rusty (i.e. practically nonexistent)
and mostly confined to exception-controlled C++ code with RAII for lock
management. If C++ is OK, I can write and post a tool for evaluation,
but if it must be plain C ... well, I'll avoid scarring you with the
sight of what I'd produce.
I suspect that a custom tool for this job could actually be fairly
simple. A lot simpler than a proper, clean and usable parallel psql, anyway.
--
Craig Ringer
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
In the buildfarm? Yes, I think so. The philosophy of the buildfarm is
that it should do what you would do yourself by hand.
And adding DBI as a requirement for running a buildfarm member would be
a significant extra barrier to entry, ISTM. (I am very fond of DBI, and
use it frequently, BTW)
What about something less than a requirement then? If you have it great,
you can run these extra tests. If you don't have it, no harm, no foul.
We could even bundle DBI and DBD::Pg to ensure that the minimum versions
are there. All the prerequisites should be in place for 99% of the machines:
a C compiler and Perl are the biggies, and I can't see any buildfarm members
running without those. :)
--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 201001071014
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
On 2010-01-07 11:50 +0200, Craig Ringer wrote:
On 7/01/2010 9:15 AM, Robert Haas wrote:
Doing this without DBI is going to be ten times harder than doing it
with DBI. Are we really sure that's not a viable option?
At this point, I'm personally wondering if it's worth putting together a
simple (ish) C program that reads a file describing command
interleavings on n connections. It fires up one thread per connection
required, then begins queuing commands up for the threads to execute in
per-thread fifo queues. The input file may contain synchronization
points where two or more explicitly specified threads (or just all
threads) must finish all their queued work before they may be given more.
CONN conn1: dbname=regress, user=regress
CONN conn2: dbname=regress, user=regress
STMT conn1: SELECT blah blah;
STMT conn2: UPDATE blah blah;
SYNC conn1, conn2
etc. Or alternately one-file-per-connection (which would be handy if one
connection has *lots* of commands and others only occasional ones) - the
only trouble there being how to conveniently specify synchronization points.
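Craig's proposed format is concrete enough to sketch out. Below is a minimal, illustrative interpreter for it in Python (dtester, mentioned later in this thread, is Python too): one worker thread per CONN line, a per-thread FIFO queue for STMT lines, and SYNC draining the named queues before more work is handed out. The executor is a stub that just records statements; a real tool would hold a libpq connection per worker. All names here are invented for illustration, not an existing tool.

```python
# Sketch of the proposed CONN/STMT/SYNC interleaving format.
# The executor is a stub; a real harness would run SQL per connection.
import threading, queue

def parse(spec):
    """Parse CONN/STMT/SYNC lines into (kind, name-or-conns, payload) tuples."""
    ops = []
    for raw in spec.strip().splitlines():
        kind, rest = raw.split(None, 1)
        if kind == "CONN":
            name, params = rest.split(":", 1)
            ops.append(("CONN", name.strip(), params.strip()))
        elif kind == "STMT":
            name, sql = rest.split(":", 1)
            ops.append(("STMT", name.strip(), sql.strip()))
        elif kind == "SYNC":
            ops.append(("SYNC", [c.strip() for c in rest.split(",")], None))
    return ops

def run(ops, execute):
    """One worker thread per connection; SYNC waits for the named queues to drain."""
    queues, threads = {}, {}
    def worker(name):
        while True:
            item = queues[name].get()
            if item is None:
                break                   # shutdown sentinel
            execute(name, item)
            queues[name].task_done()
    for kind, name, payload in ops:
        if kind == "CONN":
            queues[name] = queue.Queue()
            t = threading.Thread(target=worker, args=(name,), daemon=True)
            threads[name] = t
            t.start()
        elif kind == "STMT":
            queues[name].put(payload)
        elif kind == "SYNC":
            for conn in name:           # for SYNC, 'name' is the connection list
                queues[conn].join()     # block until that queue is fully drained
    for name, q in queues.items():
        q.join()
        q.put(None)
        threads[name].join()

log = []
spec = """
CONN conn1: dbname=regress, user=regress
CONN conn2: dbname=regress, user=regress
STMT conn1: SELECT 1;
STMT conn2: UPDATE foo SET x = 1;
SYNC conn1, conn2
"""
run(parse(spec), lambda conn, sql: log.append((conn, sql)))
print(log)
```

Statements on different connections may complete in either order before a SYNC, so only the per-connection ordering is guaranteed, which matches the per-thread FIFO semantics described above.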
I had a similar syntax in mind, but instead of using threads, just
execute the file in order using asynchronous connections.
Regards,
Marko Tiikkaja
On 2010-01-07 18:13 +0200, Marko Tiikkaja wrote:
I had a similar syntax in mind, but instead of using threads, just
execute the file in order using asynchronous connections.
I completely failed to make the point here which was to somehow mark
which statements will (or, should) block. So here we go:
A=> BEGIN;
B=> BEGIN;
A=> UPDATE foo ..;
&B=> UPDATE foo ..; -- this will block
A=> COMMIT;
B=> SELECT * FROM foo;
B=> COMMIT;
[expected output here]
Regards,
Marko Tiikkaja
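For what it's worth, Marko's '&' notation (a statement expected to block) can be prototyped without a server: run the marked statement on its own thread and assert it is still waiting before the releasing step executes. A hypothetical sketch, with a threading.Lock standing in for the row lock on foo; the Harness class and its names are invented for illustration:

```python
# Sketch of the '&'-prefix semantics: a marked step must block until
# a later step releases it. A Lock stands in for the row lock.
import threading, time

class Harness:
    def __init__(self):
        self.lock = threading.Lock()   # stands in for the row lock on foo
        self.pending = {}              # session -> Thread for blocked statements

    def step(self, session, action, expect_block=False):
        if expect_block:
            t = threading.Thread(target=action)
            t.start()
            time.sleep(0.05)           # give the statement a chance to block
            assert t.is_alive(), "statement was expected to block"
            self.pending[session] = t
        else:
            action()                   # unmarked steps run synchronously

    def finish(self, session):
        self.pending[session].join()

h = Harness()
log = []
h.step("A", lambda: (h.lock.acquire(), log.append("A: UPDATE foo")))
h.step("B", lambda: (h.lock.acquire(), log.append("B: UPDATE foo"),
                     h.lock.release()), expect_block=True)   # &B=> UPDATE foo
h.lock.release()                       # A: COMMIT releases the lock
h.finish("B")
print(log)                             # ['A: UPDATE foo', 'B: UPDATE foo']
```

A real harness would issue the marked statement on an asynchronous connection and check that no result has arrived yet, but the control flow is the same.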
On Jan 6, 2010, at 6:26 PM, Tom Lane wrote:
We have not yet fully accepted the notion that you must have Perl to
build (and, in fact, I am right now trying to verify that you don't).
I don't think that requiring Perl to test is going to fly.
I believe that the build farm already requires Perl, regardless of whether the PostgreSQL build itself requires it.
Best,
David
"Greg Sabino Mullane" <greg@turnstep.com> writes:
We could even bundle DBI and DBD::Pg to ensure that the minimum versions
are there.
As a packager, my reaction to that is "over my dead body". We have
enough trouble keeping our own software up to date, and pretty much
every external component that we've started to bundle has been a
disaster from a maintenance standpoint. (Examples: the zic database
is constant work and yet almost never up to date; the snowball stemmer
never gets updated.)
regards, tom lane
David E. Wheeler wrote:
On Jan 6, 2010, at 6:26 PM, Tom Lane wrote:
We have not yet fully accepted the notion that you must have Perl to
build (and, in fact, I am right now trying to verify that you don't).
I don't think that requiring Perl to test is going to fly.
I believe that the build farm already requires Perl, regardless of whether the PostgreSQL build itself requires it.
Unless I am mistaken, Perl is required in any case to build from CVS,
although not from a tarball.
DBI/DBD::Pg is quite another issue, however. I have been deliberately
very conservative about what modules to require for the buildfarm, and
we have similarly (and I think wisely) been conservative about what
modules to require for Perl programs in the build process.
Using DBI/DBD::Pg would raise another issue - what version of libpq
would it be using? Not the one in the build being tested, that's for
sure. If you really want to use Perl then either a Pure Perl DBI driver
(which Greg has talked about) or a thin veneer over libpq such as we
used to have in contrib seems a safer way to go.
cheers
On Thu, Jan 7, 2010 at 11:46 AM, Andrew Dunstan <andrew@dunslane.net> wrote:
Using DBI/DBD::Pg would raise another issue - what version of libpq would it
be using? Not the one in the build being tested, that's for sure. If you
really want to use Perl then either a Pure Perl DBI driver (which Greg has
talked about) or a thin veneer over libpq such as we used to have in contrib
seems a safer way to go.
I completely agree. As between those two options, count me as +1 for
the thin veneer.
...Robert
On Wed, Jan 06, 2010 at 08:40:28PM -0500, Andrew Dunstan wrote:
A parallel psql seems to me a better way to go. We talked about that
a while ago, but I don't recall what happened to it.
Greg Stark had a patch a couple of years ago. Dunno what happened to
it since then.
Cheers,
David.
--
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david.fetter@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics
Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
On Jan 6, 2010, at 6:31 PM, Kevin Grittner wrote:
As far as I've been able to determine so far, to call psql in a
relatively portable way would require something like this:
Here's an example using IPC::Open3:
#!/usr/local/bin/perl -w
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';
use constant EOC => "__DONE__\n";
my ($in1, $out1, $err1) = (gensym, gensym, gensym);
my ($in2, $out2, $err2) = (gensym, gensym, gensym);
my $pid1 = open3 $in1, $out1, $err1, 'bash';
my $pid2 = open3 $in2, $out2, $err2, 'bash';
print $in1 "cd ~/dev/postgresql\n";
print $in1 "ls doc\n";
print $in1 "echo ", EOC;
while (defined( my $line = <$out1> )) {
last if $line eq EOC;
print "LS: $line";
}
print "#### Finished file listing\n\n";
print $in2 "cd ~/dev/postgresql\n";
print $in2 "head -4 README\n";
print $in2 "echo ", EOC;
while (defined( my $line = <$out2> )) {
last if $line eq EOC;
print "HEAD: $line";
}
print "#### Finished reading README\n\n";
print $in1 "exit\n";
print $in2 "exit\n";
waitpid $pid1, 0;
waitpid $pid2, 0;
print "#### All done!\n";
With that, I get:
LS: KNOWN_BUGS
LS: MISSING_FEATURES
LS: Makefile
LS: README.mb.big5
LS: README.mb.jp
LS: TODO
LS: bug.template
LS: src
#### Finished file listing
HEAD: PostgreSQL Database Management System
HEAD: =====================================
HEAD:
HEAD: This directory contains the source code distribution of the PostgreSQL
#### Finished reading README
#### All done!
I could easily write a very simple module to abstract all that stuff for you, then you could just do something like:
my $psql1 = Shell::Pipe->new(qw(psql -U postgres));
my $psql2 = Shell::Pipe->new(qw(psql -U postgres));
$psql1->print('SELECT * from pg_class');
while (my $line = $psql1->readln) { print "Output: $line\n" }
$psql1->close;
All I'd need is some more reliable way than echoing "__DONE__" to be able to tell when a particular command has finished outputting.
Thoughts?
Best,
David
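For the record, the Shell::Pipe idea translates straightforwardly: wrap any line-oriented child process and frame each command's output with a sentinel. This Python sketch is illustrative only (the Pipe class is invented, not an existing module); with psql the sentinel could be emitted via the \echo meta-command rather than a shell echo:

```python
# Sketch of a Shell::Pipe-style wrapper: send a command, then a sentinel,
# and read output lines until the sentinel comes back.
import subprocess

class Pipe:
    SENTINEL = "__DONE__"

    def __init__(self, *argv):
        self.proc = subprocess.Popen(
            argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            text=True, bufsize=1)      # line-buffered text pipes

    def run(self, command, echo="echo"):
        """Send one command and collect its output up to the sentinel."""
        self.proc.stdin.write(command + "\n")
        self.proc.stdin.write("%s %s\n" % (echo, self.SENTINEL))
        self.proc.stdin.flush()
        out = []
        for line in self.proc.stdout:
            if line.strip() == self.SENTINEL:
                break
            out.append(line.rstrip("\n"))
        return out

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()

sh = Pipe("sh")
print(sh.run("echo hello"))            # ['hello']
sh.close()
```

For psql, something like Pipe("psql", "-qAt", "-U", "postgres") with echo="\\echo" would frame each statement the same way, though NOTICE output on stderr and multi-statement commands would still need handling.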
Andrew Dunstan <andrew@dunslane.net> writes:
Unless I am mistaken, Perl is required in any case to build from CVS,
although not from a tarball.
Right, but to my mind "building from a tarball" needs to include the
ability to run the regression tests on what you built. So injecting
Perl into that is moving the goalposts on build requirements.
regards, tom lane
On Jan 7, 2010, at 9:08 AM, Tom Lane wrote:
Right, but to my mind "building from a tarball" needs to include the
ability to run the regression tests on what you built. So injecting
Perl into that is moving the goalposts on build requirements.
In that case, there's nothing for it except concurrent psql. Or else some sort of shell environment that's available on all platforms. Do we require bash on Windows? Oh, wait, the Windows build requires Perl…
Best,
David
"David E. Wheeler" <david@kineticode.com> writes:
On Jan 7, 2010, at 9:08 AM, Tom Lane wrote:
Right, but to my mind "building from a tarball" needs to include the
ability to run the regression tests on what you built. So injecting
Perl into that is moving the goalposts on build requirements.
In that case, there's nothing for it except concurrent psql.
Unless we are prepared to define concurrency testing as something
separate from the basic regression tests. Which is kind of annoying but
perhaps less so than the alternatives. It certainly seems to me to
be the kind of thing you wouldn't need to test in order to have
confidence in a local build.
regards, tom lane
On Jan 7, 2010, at 9:19 AM, Tom Lane wrote:
In that case, there's nothing for it except concurrent psql.
Unless we are prepared to define concurrency testing as something
separate from the basic regression tests. Which is kind of annoying but
perhaps less so than the alternatives. It certainly seems to me to
be the kind of thing you wouldn't need to test in order to have
confidence in a local build.
I was rather assuming that was what we were talking about here, since we have in the past discussed testing things like dump and restore, which would require something like Perl to handle multiple processes, and wouldn't work very well for a regular release.
I think if we have the ability to add tests that are not distributed, it gives us a lot more freedom and opportunity to test things that are not currently well-tested.
Best,
David
Unless we are prepared to define concurrency testing as something
separate from the basic regression tests. Which is kind of annoying but
perhaps less so than the alternatives. It certainly seems to me to
be the kind of thing you wouldn't need to test in order to have
confidence in a local build.
I thought we were leaning towards something separate.
--
Greg Sabino Mullane greg@turnstep.com
Using DBI/DBD::Pg would raise another issue - what version of libpq
would it be using? Not the one in the build being tested, that's for
sure.
Er...why not? That's what psql uses. As for those advocating using a
custom C program written using libpq - that's basically what
DBI/DBD::Pg ends up being! Only with a shiny Perl outside and years
of real-world testing and usage.
If you really want to use Perl then either a Pure Perl DBI driver
(which Greg has talked about) or a thin veneer over libpq such as we
used to have in contrib seems a safer way to go.
I'm still *very* interested in making a libpq-less pure perl driver,
if anyone feels like funding it, let me know! :)
--
Greg Sabino Mullane greg@turnstep.com
We could even bundle DBI and DBD::Pg to ensure that the minimum versions
are there.
As a packager, my reaction to that is "over my dead body". We have
enough trouble keeping our own software up to date, and pretty much
every external component that we've started to bundle has been a
disaster from a maintenance standpoint. (Examples: the zic database
is constant work and yet almost never up to date; the snowball stemmer
never gets updated.)
As a counterargument, I'll point out that this won't be as critical
as zic, especially if we're talking about an additional/optional
set of tests. Also, Tim Bunce and I are right here, so the maintenance
should not be that bad (and I'd hazard that a lot more people in
the community know Perl/DBI than zic or stemmers).
--
Greg Sabino Mullane greg@turnstep.com
Alvaro Herrera <alvherre@commandprompt.com> wrote:
Kevin Grittner escribió:
Are we anywhere close to an agreement on what the multi-session
psql implementation would look like?
See
http://archives.postgresql.org/message-id/8204.1207689056@sss.pgh.pa.us
and followups.
Thanks, I had missed or forgotten this thread. It seems like it
drifted more-or-less to a consensus. Does everyone agree that this
thread represents the will of the community?
I see the thread started at the point where Greg Stark had a patch,
but by the end it was a "start over, stealing code as appropriate"
situation. Has anything happened with that? Any plans?
-Kevin
On Jan 7, 2010, at 12:39 PM, Greg Sabino Mullane wrote:
I'm still *very* interested in making a libpq-less pure perl driver,
if anyone feels like funding it, let me know! :)
You mean this one:
http://search.cpan.org/~arc/DBD-PgPP-0.07/lib/DBD/PgPP.pm
?
Cheers,
M
A.M. wrote:
On Jan 7, 2010, at 12:39 PM, Greg Sabino Mullane wrote:
I'm still *very* interested in making a libpq-less pure perl driver,
if anyone feels like funding it, let me know! :)
You mean this one:
http://search.cpan.org/~arc/DBD-PgPP-0.07/lib/DBD/PgPP.pm
?
It has a list of limitations as long as your arm.
cheers
andrew
On 8/01/2010 1:39 AM, Greg Sabino Mullane wrote:
Using DBI/DBD::Pg would raise another issue - what version of libpq
would it be using? Not the one in the build being tested, that's for
sure.
Er...why not? That's what psql uses.
Because you'd have to build DBD::Pg against the new libpq, as you do
psql. That means you need DBD::Pg sources and the build environment for
Perl (headers etc) not just a working Perl runtime. Big difference.
There's no guarantee the user's machine has the same major version of
libpq and thus no guarantee that DBD::Pg can be redirected to use your
custom libpq by LD_LIBRARY_PATH. It might also override the library
search path with rpath linking. Building your own would be pretty much
unavoidable unless you're prepared to either require the user to provide
a matching version of DBD::Pg or have the tests running with whatever
random version happens to be lying around.
Using whatever DBD::Pg version happens to be present on the machine
would be a bit of a nightmare for reproducibility of test results, and
would be really unattractive for use in the standard tests. "make check
fails on my some-random-distro" would become painfully common on the
GENERAL list...
Is bundling a Perl module in the source tree and building it as part of
the Pg build a reasonable choice? Personally, I don't think so.
Additionally, a dedicated testing tool like some folks have been talking
about would be really handy for users who want to test their schema.
I've had to write my own (in Java, or I'd be offering it) for this
purpose, as psql is completely unsuitable for concurrent-run testing and
I needed to show that my locking was safe and deadlock-free in some of
the more complex stored procs I have.
--
Craig Ringer
Using DBI/DBD::Pg would raise another issue - what version of libpq
would it be using? Not the one in the build being tested, that's for
sure.
Er...why not? That's what psql uses.
Because you'd have to build DBD::Pg against the new libpq, as you do
psql. That means you need DBD::Pg sources and the build environment for
Perl (headers etc) not just a working Perl runtime. Big difference.
Yes, but that is what I was envisioning. As you point out, that's the
only sane way to make sure we have a good version of DBD::Pg with
which to test. As a side effect, it put libpq through some extra
paces as well. :)
Is bundling a Perl module in the source tree and building it as part of
the Pg build a reasonable choice? Personally, I don't think so.
*shrug* It's different, but it's the best solution to the problem at
hand. It wouldn't be built as part of Pg, only as part of the tests.
Additionally, a dedicated testing tool like some folks have been talking
about would be really handy for users who want to test their schema.
I've had to write my own (in Java, or I'd be offering it) for this
purpose, as psql is completely unsuitable for concurrent-run testing and
I needed to show that my locking was safe and deadlock-free in some of
the more complex stored procs I have.
Sure, but it's the difference between waiting for someone to write something
(and then dealing with the inevitable bugs, tweaks, and enhancements), or
using a solid, known quantity (DBI + Test::More).
--
Greg Sabino Mullane greg@turnstep.com
On Mon, Jan 11, 2010 at 04:17:42AM -0000, Greg Sabino Mullane wrote:
Because you'd have to build DBD::Pg against the new libpq, as you do
psql. That means you need DBD::Pg sources and the build environment for
Perl (headers etc) not just a working Perl runtime. Big difference.
Yes, but that is what I was envisioning. As you point out, that's the
only sane way to make sure we have a good version of DBD::Pg with
which to test. As a side effect, it put libpq through some extra
paces as well. :)
Is there a reason why you're suggesting using DBI? There is also the Pg
perl module which works as well and is one tenth of the size. It also
doesn't have external dependencies. It's just a plain wrapper around
libpq, which for the purposes of testing may be better.
http://search.cpan.org/~mergl/pgsql_perl5-1.9.0/Pg.pm
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Please line up in a tree and maintain the heap invariant while
boarding. Thank you for flying nlogn airlines.
Is there a reason why you're suggesting using DBI? There is also the Pg
perl module which works as well and is one tenth of the size. It also
doesn't have external dependencies. It's just a plain wrapper around
libpq, which for the purposes of testing may be better.
Works as well? Did you take a look at that link? The last update was
early 2000, which should give you an indication of just how dead
it is.
--
Greg Sabino Mullane greg@turnstep.com
On Mon, Jan 11, 2010 at 05:42:08PM -0000, Greg Sabino Mullane wrote:
Is there a reason why you're suggesting using DBI? There is also the Pg
perl module which works as well and is one tenth of the size. It also
doesn't have external dependencies. It's just a plain wrapper around
libpq, which for the purposes of testing may be better.
Works as well? Did you take a look at that link? The last update was
early 2000, which should give you an indication of just how dead
it is.
Dead or not, it still works, even against 8.4. I have many programs
that use it. It's simply a wrapper around the libpq interface and as
long as the libpq interface remains stable (which we go to great pains
to do), so will this module.
Given the talk of importing some perl module into the postgresql tree
it just seemed more logical to me to take something that was close to
libpq and had no external dependencies than taking a module with an
external dependency (namely DBI).
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Hi,
Markus Wanner wrote:
Sorry, if that didn't get clear. I'm trying to put together something I
can release real soon now (tm). I'll keep you informed.
Okay, here we go: dtester version 0.0.
This emerged out of Postgres-R, where I don't just need to test multiple
client connections, but multiple postmasters interacting with each
other. Nonetheless, it may be suitable for other needs as well,
especially testing with concurrent sessions.
I've decided to release this as a separate project named dtester, as
proposed by Michael Tan (thanks for your inspiration).
It's certainly missing lots of things, mainly documentation. However,
I've attached a patch which integrates nicely into the Postgres
Makefiles, so you just need to say: make dcheck.
That very same patch includes a test case with three concurrent
transactions with circular dependencies, where the current SERIALIZABLE
isolation level fails to provide serializability.
Installing dtester itself is as simple as 'python setup.py install' in
the extracted archive's directory.
Go try it, read the code and simply ask, if you get stuck. I'll try to
come up with some more documentation and such...
Regards
Markus Wanner
Attachments:
dtester-0.0.tar.bz2 (application/x-bzip) [binary attachment omitted]