RFC: adding pytest as a supported test framework
Hi all,
For the v18 cycle, I would like to try to get pytest [1] in as a
supported test driver, in addition to the current offerings.
(I'm tempted to end the email there.)
We had an unconference session at PGConf.dev [2] around this topic.
There seemed to be a number of nodding heads and some growing
momentum. So here's a thread to try to build wider consensus. If you
have a competing or complementary test proposal in mind, heads up!
== Problem Statement(s) ==
1. We'd like to rerun a failing test by itself.
2. It'd be helpful to see _what_ failed without poring over logs.
These two got the most nodding heads of the points I presented. (#1
received tongue-in-cheek applause.) I think most modern test
frameworks are going to give you these things, but unfortunately we
don't have them.
Additionally,
3. Many would like to use modern developer tooling during testing
(language servers! autocomplete! debuggers! type checking!) and we
can't right now.
4. It'd be great to split apart client-side tests from server-side
tests. Driving Postgres via psql all the time is fine for acceptance
testing, but it becomes a big problem when you need to test how
clients talk to servers with incompatible feature sets, or how a peer
behaves when talking to something buggy.
5. Personally, I want to implement security features test-first (for
high code coverage and other benefits), and our Perl-based tests are
usually too cumbersome for that.
== Why pytest? ==
From the small and biased sample at the unconference session, it looks
like a number of people have independently settled on pytest in their
own projects. In my opinion, pytest occupies a nice space where it
solves some of the above problems for us, and it gives us plenty of
tools to solve the other problems without too much pain.
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
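To make the fixture idea concrete, here is a minimal sketch (the names
and setup details are hypothetical; a real suite would run initdb and
pg_ctl inside the shared fixture):

import pytest

@pytest.fixture(scope="module")
def cluster(tmp_path_factory):
    # Expensive setup, run once per module. A real version would run
    # initdb and start the server here.
    datadir = tmp_path_factory.mktemp("pgdata")
    yield {"datadir": datadir}
    # Teardown runs once, after the last test in the module finishes.

def test_datadir_exists(cluster):
    assert cluster["datadir"].exists()

def test_datadir_is_shared(cluster):
    # Reuses the module-scoped setup built for the test above.
    assert cluster["datadir"].is_dir()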
Problem 2 (seeing what failed): pytest does this via assertion
introspection and very detailed failure reporting. If you haven't seen
this before, take a look at the pytest homepage [1]; there's an
example of a full log.
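As a quick illustration (a contrived test, not one from the suite),
pytest rewrites plain asserts so the failure report includes the
mismatching values themselves:

def test_row_values():
    expected = [(1, "alice"), (2, "bob")]
    actual = [(1, "alice"), (2, "bobb")]
    # On failure, pytest prints both lists and points at the first
    # differing element; no hand-written failure message is needed.
    assert actual == expected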
Problem 3 (modern tooling): We get this from Python's very active
developer base.
Problems 4 (splitting client and server tests) and 5 (making it easier
to write tests first) aren't really Python- or pytest-specific, but I
have done both quite happily in my OAuth work [3], and I've since
adapted that suite multiple times to develop and test other proposals
on this list, like LDAP/SCRAM, client encryption, direct SSL, and
compression.
Python's standard library has lots of power by itself, with very good
documentation. And virtualenvs and better package tooling have made it
much easier, IMO, to avoid the XKCD dependency tangle [4] of the
2010s. When it comes to third-party packages, which I think we're
probably going to want in moderation, we would still need to discuss
supply chain safety. Python is not as mature here as, say, Go.
== A Plan ==
Even if everyone were on board immediately, there's a lot of work to
do. I'd like to add pytest in a more probationary status, so we can
iron out the inevitable wrinkles. My proposal would be:
1. Commit bare-bones support in our Meson setup for running pytest, so
everyone can kick the tires independently.
2. Add a test for something that we can't currently exercise.
3. Port a test from a place where the maintenance is terrible, to see
if we can improve it.
If we hate it by that point, no harm done; tear it back out. Otherwise
we keep rolling forward.
Thoughts? Suggestions?
Thanks,
--Jacob
[1]: https://docs.pytest.org/
[2]: https://wiki.postgresql.org/wiki/PGConf.dev_2024_Developer_Unconference#New_testing_frameworks
[3]: https://github.com/jchampio/pg-pytest-suite
[4]: https://xkcd.com/1987/
Hi!
On Mon, Jun 10, 2024 at 9:46 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
Thoughts? Suggestions?
Thank you for working on this.
Do you think you could re-use something from testgres[1] package?
Links.
1. https://github.com/postgrespro/testgres
------
Regards,
Alexander Korotkov
Supabase
Hi,
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
On 2024-06-10 11:46:00 -0700, Jacob Champion wrote:
4. It'd be great to split apart client-side tests from server-side
tests. Driving Postgres via psql all the time is fine for acceptance
testing, but it becomes a big problem when you need to test how
clients talk to servers with incompatible feature sets, or how a peer
behaves when talking to something buggy.
That seems orthogonal to using pytest vs something else?
== Why pytest? ==
From the small and biased sample at the unconference session, it looks
like a number of people have independently settled on pytest in their
own projects. In my opinion, pytest occupies a nice space where it
solves some of the above problems for us, and it gives us plenty of
tools to solve the other problems without too much pain.
We might be able to alleviate that by simply abstracting it away, but I found
pytest's testrunner pretty painful. Oodles of options that are not very well
documented and that often don't work because they are very specific to some
situations, without that being explained.
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
OTOH, that's quite likely to increase overall test times very
significantly. Yes, sometimes that can be avoided with careful use of various
features, but often that's hard, and IME is rarely done rigorously.
Problem 2 (seeing what failed): pytest does this via assertion
introspection and very detailed failure reporting. If you haven't seen
this before, take a look at the pytest homepage [1]; there's an
example of a full log.
That's not really different than what the perl tap test stuff allows. We
indeed are bad at utilizing it, but I'm not sure that switching languages will
change that.
I think part of the problem is that the information about what precisely
failed is often much harder to collect when testing multiple servers
interacting than when doing localized unit tests.
I think we ought to invest a bunch in improving that, I'd hope that a lot of
that work would be largely independent of the language the tests are written
in.
Python's standard library has lots of power by itself, with very good
documentation. And virtualenvs and better package tooling have made it
much easier, IMO, to avoid the XKCD dependency tangle [4] of the
2010s.
Ugh, I think this is actually python's weakest area. There's about a dozen
package managers and "python distributions", that are at best half compatible,
and the documentation situation around this is *awful*.
When it comes to third-party packages, which I think we're
probably going to want in moderation, we would still need to discuss
supply chain safety. Python is not as mature here as, say, Go.
What external dependencies are you imagining?
== A Plan ==
Even if everyone were on board immediately, there's a lot of work to
do. I'd like to add pytest in a more probationary status, so we can
iron out the inevitable wrinkles. My proposal would be:
1. Commit bare-bones support in our Meson setup for running pytest, so
everyone can kick the tires independently.
2. Add a test for something that we can't currently exercise.
3. Port a test from a place where the maintenance is terrible, to see
if we can improve it.
If we hate it by that point, no harm done; tear it back out. Otherwise
we keep rolling forward.
I think somewhere between 1 and 4 a *substantial* amount of work would be
required to provide a bunch of the infrastructure that Cluster.pm etc
provide. Otherwise we'll end up with a lot of copy pasted code between tests.
Greetings,
Andres Freund
On 2024-06-10 Mo 16:04, Andres Freund wrote:
Hi,
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Andres,
I get that you don't like perl. But it's hard for me to take this
terribly seriously. "desperately" seems like massive overstatement at
best. As for what up and coming developers learn, they mostly don't
learn C either, and that's far more critical to what we do.
I'm not sure what part of the testing infrastructure you think is
unmaintained. For example, the last release of Test::Simple was all the
way back on April 25.
Maybe there are some technical superiorities about what Jacob is
proposing, enough for us to add it to our armory. I'll keep an open mind
on that.
But let's not throw the baby out with the bathwater. Quite apart from
anything else, a wholesale rework of the test infrastructure would make
backpatching more painful.
cheers
andrew
--
Andrew Dunstan
EDB:https://www.enterprisedb.com
On Mon, 10 Jun 2024 at 20:46, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
For the v18 cycle, I would like to try to get pytest [1] in as a
supported test driver, in addition to the current offerings.
Huge +1 from me (but I'm definitely biased here)
Thoughts? Suggestions?
I think the most important thing is that we make it easy for people to
use this thing, and use it in a "correct" way. I have met very few
people that actually like writing tests, so I think it's very
important to make the barrier to do so as low as possible.
For the PgBouncer repo I created my own pytest based test suite
~1.5 years ago now. I tried to make it as easy as possible to write
tests there, and it has worked out quite well imho. I don't think it
makes sense to copy all things I did there verbatim, because some of
it is quite specific to testing PgBouncer. But I do think there's
quite a few things that could probably be copied (or at least inspire
what you do). Some examples:
1. helpers to easily run shell commands, most importantly setting
check=True by default[1]
2. helper to get a free tcp port[2] (see the sketch after the links below)
3. helper to check if the log contains a specific string[3]
4. automatically show PG logs on test failure[4]
5. helpers to easily run sql commands (psycopg interface isn't very
user friendly imho for the common case)[5]
6. startup/teardown cleanup logic[6]
[1]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L83-L131
[2]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L210-L233
[3]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L1125-L1143
[4]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L1075-L1103
[5]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L326-L338
[6]: https://github.com/pgbouncer/pgbouncer/blob/3f791020fb017c570fcd2db390600a353f1cba0c/test/utils.py#L546-L642
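For flavor, here is the idea behind helper 2 as a minimal sketch (not
PgBouncer's exact code; see [2] for that):

import socket

def get_free_tcp_port() -> int:
    # Bind to port 0 so the kernel picks an unused port, then hand the
    # number out. Theoretically racy, but fine in practice for tests.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]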
On Mon, 10 Jun 2024 at 22:04, Andres Freund <andres@anarazel.de> wrote:
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
OTOH, that's quite likely to increase overall test times very
significantly. Yes, sometimes that can be avoided with careful use of various
features, but often that's hard, and IME is rarely done rigorously.
You definitely want to cache things like initdb and "pg_ctl start".
But that's fairly easy to do with some startup/teardown logic. For
PgBouncer I create a dedicated schema for each test that needs to
create objects and automatically drop that schema at the end of the
test[6] (including any temporary objects outside of schemas like
users/replication slots). You can even choose not to clean up certain
large schemas if they are shared across multiple tests.
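A minimal sketch of that per-test-schema pattern (assuming a server is
already running and reachable via the usual PG* environment variables,
and psycopg 3 is installed):

import uuid

import psycopg
import pytest

@pytest.fixture
def test_schema():
    # One uniquely named schema per test, dropped afterwards so tests
    # can't leak objects into each other.
    name = f"test_{uuid.uuid4().hex[:8]}"
    with psycopg.connect(autocommit=True) as conn:
        conn.execute(f"CREATE SCHEMA {name}")
        yield name
        # Runs as teardown even if the test failed.
        conn.execute(f"DROP SCHEMA {name} CASCADE")

def test_create_table(test_schema):
    with psycopg.connect(autocommit=True) as conn:
        conn.execute(f"CREATE TABLE {test_schema}.t (id int)")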
Problem 2 (seeing what failed): pytest does this via assertion
introspection and very detailed failure reporting. If you haven't seen
this before, take a look at the pytest homepage [1]; there's an
example of a full log.
That's not really different than what the perl tap test stuff allows. We
indeed are bad at utilizing it, but I'm not sure that switching languages will
change that.
It's not about allowing, it's about doing the thing that you want by
default. The following code
assert a == b
will show you the actual values of both a and b when the test fails,
instead of saying something like "false is not true". Of course you can
provide a message here too, like with Perl's ok function, but even
when you don't the output is helpful.
I think part of the problem is that the information about what precisely
failed is often much harder to collect when testing multiple servers
interacting than when doing localized unit tests.
I think we ought to invest a bunch in improving that, I'd hope that a lot of
that work would be largely independent of the language the tests are written
in.
Well, as you already noted no-one that started doing dev stuff in the
last 10 years knows Perl nor wants to learn it. So a large part of the
community tries to touch the current perl test suite as little as
possible. I personally haven't tried to improve anything about our
perl testing framework, even though I'm normally very much into
improving developer tooling.
Python's standard library has lots of power by itself, with very good
documentation. And virtualenvs and better package tooling have made it
much easier, IMO, to avoid the XKCD dependency tangle [4] of the
2010s.
Ugh, I think this is actually python's weakest area. There's about a dozen
package managers and "python distributions", that are at best half compatible,
and the documentation situation around this is *awful*.
I definitely agree this is Python's weakest area. But since venv is
part of the python standard library it's much better. I have the
following short blurb in PgBouncer's test README[7] and it has
worked for all contributors so far:
# create a virtual environment (only needed once)
python3 -m venv env
# activate the environment. You will need to activate this environment in
# your shell every time you want to run the tests. (so it's needed once per
# shell).
source env/bin/activate
# Install the dependencies (only needed once, or whenever extra dependencies
# get added to requirements.txt)
pip install -r requirements.txt
[7]: https://github.com/pgbouncer/pgbouncer/blob/master/test/README.md
I think somewhere between 1 and 4 a *substantial* amount of work would be
required to provide a bunch of the infrastructure that Cluster.pm etc
provide. Otherwise we'll end up with a lot of copy pasted code between tests.
Totally agreed, that we should have a fairly decent base to work on
top of. I think we should at least port a few tests to show that the
base has at least the most basic functionality.
On Mon, 10 Jun 2024 at 22:47, Andrew Dunstan <andrew@dunslane.net> wrote:
As for what up and coming developers learn, they mostly don't learn C either, and that's far more critical to what we do.
I think many up and coming devs have at least touched C somewhere
(e.g. in university). And because it's more critical to the project
and also to many other low level projects, they don't mind learning it
so much if they don't know it yet. But I, for example, try to write as
few Perl tests as possible, because getting good at Perl has pretty
much no use to me outside of writing tests for postgres.
(I do personally think that official Rust support in Postgres would
probably be a good thing, but that is a whole other discussion that
I'd like to save for some other day)
But let's not throw the baby out with the bathwater. Quite apart from anything else, a wholesale rework of the test infrastructure would make backpatching more painful.
Backporting test improvements to decrease backporting pain is
something we don't look badly upon afaict (Citus's test suite breaks
semi-regularly on minor PG version updates due to some slight output
changes introduced by e.g. an updated version of the isolationtester).
Hi,
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
On 2024-06-10 Mo 16:04, Andres Freund wrote:
Hi,
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Andres,
I get that you don't like perl.
I indeed don't particularly like perl - but that's really not the main
issue. I've already learned [some of] it. The main issue is that I've
also watched several newer folks try to write tests in it, and it was not
pretty.
But it's hard for me to take this terribly seriously. "desperately" seems
like massive overstatement at best.
Shrug.
As for what up and coming developers learn, they mostly don't learn C
either, and that's far more critical to what we do.
C is a lot more useful to them than perl. And it's actually far more
widely known these days than perl. C does teach you some reasonably
low-level-ish understanding of hardware. There are gazillions of programs
written in C that we'll have to maintain for decades. I don't think that's
comparably true for perl.
I'm not sure what part of the testing infrastructure you think is
unmaintained. For example, the last release of Test::Simple was all the way
back on April 25.
IPC::Run is quite buggy and basically just maintained by Noah these days.
Greetings,
Andres Freund
On 2024-06-10 Mo 21:49, Andres Freund wrote:
Hi,
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
On 2024-06-10 Mo 16:04, Andres Freund wrote:
Hi,
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Andres,
I get that you don't like perl.
I indeed don't particularly like perl - but that's really not the main
issue. I've already learned [some of] it. The main issue is that I've
also watched several newer folks try to write tests in it, and it was not
pretty.
Hmm. I've done webinars in the past about how to write TAP tests for
PostgreSQL, maybe I need to beef that up some.
I'm not sure what part of the testing infrastructure you think is
unmaintained. For example, the last release of Test::Simple was all the way
back on April 25.
IPC::Run is quite buggy and basically just maintained by Noah these days.
Yes, that's true. I think the biggest pain point is possibly the
recovery tests.
Some time ago I did some work on wrapping libpq using the perl FFI
module. It worked pretty well, and would mean we could probably avoid
many uses of IPC::Run, and would probably be substantially more
efficient (no fork required). It wouldn't avoid all uses of IPC::Run,
though.
But my point was mainly that while a new framework might have value, I
don't think we need to run out and immediately rewrite several hundred
TAP tests. Let's pick the major pain points and address those.
cheers
andrew
--
Andrew Dunstan
EDB:https://www.enterprisedb.com
On Mon, Jun 10, 2024 at 1:04 PM Andres Freund <andres@anarazel.de> wrote:
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Okay. Personally, I'm going to try to stay out of discussions around
subtracting Perl and focus on adding Python, for a bunch of different
reasons:
- Tests aren't cheap, but in my experience, the maintenance-cost math
for tests is a lot different than the math for implementations.
- I don't personally care for Perl, but having tests in any form is
usually better than not having them.
- Trying to convince people to get rid of X while adding Y is a good
way to make sure Y never happens.
On 2024-06-10 11:46:00 -0700, Jacob Champion wrote:
4. It'd be great to split apart client-side tests from server-side
tests. Driving Postgres via psql all the time is fine for acceptance
testing, but it becomes a big problem when you need to test how
clients talk to servers with incompatible feature sets, or how a peer
behaves when talking to something buggy.
That seems orthogonal to using pytest vs something else?
Yes, I think that's fair. It's going to be hard not to talk about
"things that pytest+Python don't give us directly but are much easier
to build" in all of this (and I tried to call that out in the next
section, maybe belatedly). I think I'm going to have to convince both
a group of people who want to ask "why pytest in particular?" and a
group of people who ask "why isn't what we have good enough?"
== Why pytest? ==
From the small and biased sample at the unconference session, it looks
like a number of people have independently settled on pytest in their
own projects. In my opinion, pytest occupies a nice space where it
solves some of the above problems for us, and it gives us plenty of
tools to solve the other problems without too much pain.
We might be able to alleviate that by simply abstracting it away, but I found
pytest's testrunner pretty painful. Oodles of options that are not very well
documented and that often don't work because they are very specific to some
situations, without that being explained.
Hm. There are a bunch of them, but I've never needed to go through the
oodles of options. Anything in particular that caused problems?
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
OTOH, that's quite likely to increase overall test times very
significantly. Yes, sometimes that can be avoided with careful use of various
features, but often that's hard, and IME is rarely done rigorously.
Well, scopes are pretty front and center when you start building
pytest fixtures, and the complicated longer setups will hopefully
converge correctly early on and be reused everywhere else. I imagine
no one wants to build cluster setup from scratch.
On a slight tangent, is this not a problem today? I mean... part of my
personal long-term goal is in increasing test hygiene, which is going
to take some shifts in practice. As long as review keeps the quality
of the tests fairly high, I see the inevitable "our tests take too
long" problem as a good one. That's true no matter what framework we
use, unless the framework is so bad that no one uses it and the
runtime is trivial. If we're worried that people will immediately
start exploding the runtime and no one will notice during review,
maybe we can have some infrastructure flag how much a patch increased
it?
Problem 2 (seeing what failed): pytest does this via assertion
introspection and very detailed failure reporting. If you haven't seen
this before, take a look at the pytest homepage [1]; there's an
example of a full log.
That's not really different than what the perl tap test stuff allows. We
indeed are bad at utilizing it, but I'm not sure that switching languages will
change that.
Jelte already touched on this, but I wanted to hammer on the point: If
no one, not even the developers who chose and like Perl, is using
Test::More in a way that's maintainable, I would prefer to use a
framework that does maintainable things by default so that you have to
try really hard to screw it up. It is possible to screw up `assert
actual == expected`, but it takes more work than doing it the right
way.
I think part of the problem is that the information about what precisely
failed is often much harder to collect when testing multiple servers
interacting than when doing localized unit tests.
I think we ought to invest a bunch in improving that, I'd hope that a lot of
that work would be largely independent of the language the tests are written
in.
We do a lot more acceptance testing than internal testing, which came
up as a major complaint from me and others during the unconference.
One of the reasons people avoid writing internal tests in Perl is
because it's very painful to find a rhythm with Test::More. From
experience test-driving the OAuth work, I'm *very* happy with the
development cycle that pytest gave me.
Other languages _could_ do that, sure. It's a simple matter of programming...
Ugh, I think this is actually python's weakest area. There's about a dozen
package managers and "python distributions", that are at best half compatible,
and the documentation situation around this is *awful*.
So... don't support the half-compatible stuff? I thought this
conversation was still going on with Windows Perl (ActiveState?
Strawberry?) but everyone just seems to pick what works for them and
move on to better things to do.
Modern CPython includes pip and venv. Done. If someone comes to us
with some horrible Anaconda setup wanting to know why their duct tape
doesn't work, can't we just tell them no?
When it comes to third-party packages, which I think we're
probably going to want in moderation, we would still need to discuss
supply chain safety. Python is not as mature here as, say, Go.
What external dependencies are you imagining?
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
- construct, for on-the-wire packet representations and manipulation; and
- pyca/cryptography, for easy generation of certificates and manual
crypto testing.
I'd imagine each would need considerable discussion, if there is
interest in doing the same things that I do with them.
I think somewhere between 1 and 4 a *substantial* amount of work would be
required to provide a bunch of the infrastructure that Cluster.pm etc
provide. Otherwise we'll end up with a lot of copy pasted code between tests.
Possibly, yes. I think it depends on what you want to test first, and
there's a green-field aspect of hope/anxiety/ennui, too. Are you
trying to port the acceptance-test framework that we already have, or
are you trying to build a framework that can handle the things we
can't currently test? Will it be easier to refactor duplication into
shared fixtures when the language doesn't encourage an infinite number
of ways to do things? Or will we have to keep on top of it to avoid
pain?
--Jacob
On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov
<aekorotkov@gmail.com> wrote:
Thank you for working on this.
Do you think you could re-use something from testgres[1] package?
Possibly? I think we're all coming at this with our own bags of tricks
and will need to carve off pieces to port, contribute, or reimplement.
Does testgres have something in particular you'd like to see the
Postgres tests support?
Thanks,
--Jacob
On Mon, Jun 10, 2024 at 06:49:11PM -0700, Andres Freund wrote:
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
On 2024-06-10 Mo 16:04, Andres Freund wrote:
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
As for what up and coming developers learn, they mostly don't learn C
either, and that's far more critical to what we do.
C is a lot more useful to them than perl. And it's actually far more
widely known these days than perl.
If we're going to test in a non-Perl language, I'd pick C over Python. There
would be several other unlikely-community-choice languages I'd pick over
Python (C#, Java, C++). We'd need a library like today's Perl
PostgreSQL::Test to make C-language tests nice, but the same would apply to
any new language.
I also want the initial scope to be the new language coexisting with the
existing Perl tests. If a bulk translation ever happens, it should happen
long after the debut of the new framework. That said, I don't much trust a
human-written bulk language translation to go through without some tests
accidentally ceasing to test what they test in Perl today.
On 2024-06-11 Tu 19:48, Noah Misch wrote:
On Mon, Jun 10, 2024 at 06:49:11PM -0700, Andres Freund wrote:
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
On 2024-06-10 Mo 16:04, Andres Freund wrote:
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
As for what up and coming developers learn, they mostly don't learn C
either, and that's far more critical to what we do.
C is a lot more useful to them than perl. And it's actually far more
widely known these days than perl.
If we're going to test in a non-Perl language, I'd pick C over Python. There
would be several other unlikely-community-choice languages I'd pick over
Python (C#, Java, C++). We'd need a library like today's Perl
PostgreSQL::Test to make C-language tests nice, but the same would apply to
any new language.
Indeed. We've invested quite a lot of effort on that infrastructure. I
guess people can learn from what we've done so a second language might
be easier to support.
(Java would be my pick from your unlikely set, but I can see the
attraction of Python.)
I also want the initial scope to be the new language coexisting with the
existing Perl tests. If a bulk translation ever happens, it should happen
long after the debut of the new framework. That said, I don't much trust a
human-written bulk language translation to go through without some tests
accidentally ceasing to test what they test in Perl today.
+1
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, 12 Jun 2024 at 01:48, Noah Misch <noah@leadboat.com> wrote:
If we're going to test in a non-Perl language, I'd pick C over Python. There
would be several other unlikely-community-choice languages I'd pick over
Python (C#, Java, C++).
My main goals for this thread are:
1. Allowing people to quickly write tests
2. Have those tests do what the writer intended them to do
3. Have good error reporting by default
Those goals indeed don't necessitate python.
But I think those are really hard to achieve with any C based
framework, and probably with C++ too. Also manual memory management in
tests seems to add tons of complexity for basically no benefit.
I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable
choices for me (and possibly some more). They allow some type of
introspection, they have a garbage collector, and their general
tooling is quite good.
But I think a dynamically typed scripting language is much more
fitting for writing tests like this. I love static typing for
production code, but imho it really doesn't have much benefit for
tests.
As scripting languages go, the ones that are still fairly heavily in
use are Javascript, Python, Ruby, and PHP. I think all of those could
probably work, but my personal order of preference would be Python,
Ruby, Javascript, PHP.
Finally, I'm definitely biased towards using Python myself. But I
think there's good reasons for that:
1. In the data space, which Postgres is in, Python is very heavily used for analysis
2. Everyone coming out of university these days knows it to some extent
3. Multiple people in the community have been writing Postgres related
tests in python and have enjoyed doing so (me[1], Jacob[2],
Alexander[3])
What language people want to write tests in is obviously very
subjective. And obviously we cannot allow every language for writing
tests. But if ~25% of the community prefers to write their tests in
Python, then I think that should be enough of a reason to allow them
to do so.
TO CLARIFY: This thread is not a proposal to replace Perl with Python.
It's a proposal to allow people to also write tests in Python.
I also want the initial scope to be the new language coexisting with the
existing Perl tests. If a bulk translation ever happens, it should happen
long after the debut of the new framework. That said, I don't much trust a
human-written bulk language translation to go through without some tests
accidentally ceasing to test what they test in Perl today.
I definitely don't think we should rewrite all the tests that we have
in Perl today into some other language. But I do think that whatever
language we choose, that language should make it at least as easy to
write tests, as easy to read them and as easy to see that they are
testing the intended thing, as is currently the case for Perl.
Rewriting a few Perl tests into the new language, even if not merging
the rewrite, is a good way of validating that imho.
PS. For PgBouncer I actually hand-rewrote all the tests that we had in
bash (which is the worst testing language ever) in Python and doing so
actually found more bugs in PgBouncer code that our bash tests
wouldn't catch. So it's not necessarily the case that you lose
coverage by rewriting tests.
[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test
[2]: https://github.com/jchampio/pg-pytest-suite
[3]: https://github.com/postgrespro/testgres
On Tue, Jun 11, 2024 at 5:31 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov
<aekorotkov@gmail.com> wrote:
Thank you for working on this.
Do you think you could re-use something from testgres[1] package?
Possibly? I think we're all coming at this with our own bags of tricks
and will need to carve off pieces to port, contribute, or reimplement.
Does testgres have something in particular you'd like to see the
Postgres tests support?
Generally, testgres was initially designed as a Python analogue of what
we have in src/test/perl/PostgreSQL/Test. In particular, its
testgres.PostgresNode is an analogue of PostgreSQL::Test::Cluster. It
comes under the PostgreSQL License. So, I wonder if we could revise it
and fetch most of it into our source tree.
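For reference, basic testgres usage looks roughly like this (adapted
from its README; the exact API may have shifted between versions):

import testgres

# Spin up a throwaway node: initdb, start, query, and clean up on exit.
with testgres.get_new_node() as node:
    node.init()
    node.start()
    print(node.execute("SELECT version()"))
    node.stop()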
------
Regards,
Alexander Korotkov
Supabase
On Wed, Jun 12, 2024 at 2:48 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:
On Tue, Jun 11, 2024 at 5:31 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
On Mon, Jun 10, 2024 at 12:26 PM Alexander Korotkov
<aekorotkov@gmail.com> wrote:
Thank you for working on this.
Do you think you could re-use something from testgres[1] package?
Possibly? I think we're all coming at this with our own bags of tricks
and will need to carve off pieces to port, contribute, or reimplement.
Does testgres have something in particular you'd like to see the
Postgres tests support?
Generally, testgres was initially designed as a Python analogue of what
we have in src/test/perl/PostgreSQL/Test. In particular, its
testgres.PostgresNode is an analogue of PostgreSQL::Test::Cluster. It
comes under the PostgreSQL License. So, I wonder if we could revise it
and fetch most of it into our source tree.
Plus, testgres has existed since 2016 and already has quite an amount
of use cases. This is what I quickly found on GitHub:
https://github.com/adjust/pg_querylog
https://github.com/postgrespro/pg_pathman
https://github.com/lanterndata/lantern
https://github.com/orioledb/orioledb
https://github.com/cbandy/pgtwixt
https://github.com/OpenNTI/nti.testing
https://github.com/postgrespro/pg_probackup
https://github.com/postgrespro/rum
------
Regards,
Alexander Korotkov
Supabase
On Wed, 12 Jun 2024 at 01:48, Noah Misch <noah@leadboat.com> wrote:
If we're going to test in a non-Perl language, I'd pick C over Python.
<snip>
We'd need a library like today's Perl
PostgreSQL::Test to make C-language tests nice, but the same would apply to
any new language.
P.P.S. We already write tests in C, we use it for testing libpq[1].
I'd personally definitely welcome a C library to make those tests
nicer to write, because I've written a fair bit of those in the past
and currently it's not very fun to do.
[1]: https://github.com/postgres/postgres/blob/master/src/test/modules/libpq_pipeline/libpq_pipeline.c
On Jun 12, 2024, at 6:40 AM, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable
choices for me (and possibly some more). They allow some type of
introspection, they have a garbage collector, and their general
tooling is quite good.
Having used Python for 15+ years and then abandoned it for all projects, I would
say the single most important points for a long-term project like Postgres are,
not necessarily in order: package stability, package depth, semantic versioning,
available resources, and multiprocessor support.
The reason I abandoned Python was the constant API breaks in packages. Yes,
python is a great language to teach in school for a one-time class project, but
that is not Postgres’s use-case. Remember that Fortran and Pascal were the
darlings for teaching in school prior to Python and no-one uses them any more.
Yes Python innovates fast and is fashionable. But again, not Postgres’s use-case.
I believe that anyone coming out of school these days would have a relatively
easy transition to any of Go, Rust, Kotlin, Swift, etc. In other words, any of
the modern languages. In addition, the language should scale well to
multiprocessors, because parallel testing is becoming more important every day.
If the Postgres project is going to pick a new language for testing, it should
pick a language for the next 50 years based on the project's needs.
Python is good for package depth and resource availability, but fails IMO in the
other categories. My experience is that Python, where the program flow can change
because of non-visible characters, is a terrible way to write robust, long-term
maintainable code. Because of this, most of the modern languages are going to be
closer in style to Postgres’s C code base than Python.
Jelte Fennema-Nio:
As scripting languages go, the ones that are still fairly heavily in
use are Javascript, Python, Ruby, and PHP. I think all of those could
probably work, but my personal order of preference would be Python,
Ruby, Javascript, PHP.
Finally, I'm definitely biased towards using Python myself. But I
think there's good reasons for that:
1. In the data space, which Postgres is in, Python is very heavily used for analysis
2. Everyone coming out of university these days knows it to some extent
3. Multiple people in the community have been writing Postgres related
tests in python and have enjoyed doing so (me[1], Jacob[2],
Alexander[3])
PostgREST also uses pytest for integration tests - and that was a very
good decision compared to the bash based tests we had before.
One more argument for Python compared to the other mentioned scripting
languages: Python is already a development dependency via meson. None of
the other 3 are. In a future where meson will be the only build system,
we will have python "for free" already.
Best,
Wolfgang
Hi,
On 2024-06-11 08:04:57 -0400, Andrew Dunstan wrote:
Some time ago I did some work on wrapping libpq using the perl FFI module.
It worked pretty well, and would mean we could probably avoid many uses of
IPC::Run, and would probably be substantially more efficient (no fork
required). It wouldn't avoid all uses of IPC::Run, though.
FWIW, I'd *love* to see work on this continue. The reduction in test runtime
on windows is substantial and would make the hack->CI->fail->hack loop a
good bit shorter. And save money.
But my point was mainly that while a new framework might have value, I don't
think we need to run out and immediately rewrite several hundred TAP tests.
Oh, yea. That's not at all feasible to just do in one go.
Greetings,
Andres Freund
On Wed, 12 Jun 2024 at 15:49, FWS Neil <neil@fairwindsoft.com> wrote:
I believe that anyone coming out of school these days would have a relatively
easy transition to any of Go, Rust, Kotlin, Swift, etc. In other words, any of
the modern languages.
Agreed, which is why I said they'd be acceptable to me. But I think
one important advantage of Python is that it's clear that many people
in the community are willing to write tests in it. At PGConf.dev there
were a lot of people in the unconference session about this. Also many
people already wrote a Postgres testing framework for python, and are
using it (see list of projects that Alexander shared). I haven't seen
that level of willingness to write tests for any of those other
languages (yet).
In addition, the language should scale well to
multiprocessors, because parallel testing is becoming more important every day.
<snip>
Python is good for package depth and resource availability, but fails IMO in the
other categories.
You can easily pin packages or call a different function based on the
version of the package, so I'm not sure what the problem is with
package stability. Also, chances are we'll pull in very few external
packages and rely mostly on the stdlib (which is quite stable).
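As an illustration of that kind of version-based dispatch (the package
name and version threshold here are purely hypothetical):

import importlib.metadata

# Pick a code path based on the installed version of a dependency.
major = int(importlib.metadata.version("somepackage").split(".")[0])
if major >= 2:
    pass  # use the newer API here
else:
    pass  # fall back to the pre-2.0 call here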
Regarding parallelised running of tests, I agree that's very
important. And indeed normally parallelism in python can be a pain
(although async/await makes I/O parallelism a lot better at least).
But running pytest tests in parallel is extremely easy by using
pytest-xdist[1], so I don't really think there's an issue for this
specific Python use case.
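For example (assuming pytest-xdist is installed, and a test/ directory;
"-n auto" spawns one worker per CPU core):

import pytest

# Equivalent to running "pytest -n auto test/" from the shell:
# pytest-xdist distributes the suite across all available CPU cores.
pytest.main(["-n", "auto", "test/"])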
My experience is that Python, where the program flow can change
because of non-visible characters, is a terrible way to write robust, long-term
maintainable code. Because of this, most of the modern languages are going to be
closer in style to Postgres’s C code base than Python.
I'm assuming this is about spaces vs curly braces for blocks? Now that
we have auto formatters for every modern programming language I indeed
prefer curly braces myself too. But honestly that's pretty much a tabs
vs spaces discussion.
[1]: https://pypi.org/project/pytest-xdist/
Hi,
On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:
On Mon, Jun 10, 2024 at 1:04 PM Andres Freund <andres@anarazel.de> wrote:
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Okay. Personally, I'm going to try to stay out of discussions around
subtracting Perl and focus on adding Python, for a bunch of different
reasons:
I think I might have formulated my paragraph above badly - I didn't mean that
we should move away from perl tests tomorrow, but that we need a path forward
that allows folks to write tests without perl.
- Tests aren't cheap, but in my experience, the maintenance-cost math
for tests is a lot different than the math for implementations.
At the moment they often tend to be *more* expensive, due to spurious
failures. That's mostly not perl's fault, don't get me wrong, but us not
having better infrastructure for testing complicated behaviour and/or testing
things on a more narrow basis.
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
OTOH, that's quite likely to increase overall test times very
significantly. Yes, sometimes that can be avoided with careful use of various
features, but often that's hard, and IME is rarely done rigorously.
Well, scopes are pretty front and center when you start building
pytest fixtures, and the complicated longer setups will hopefully
converge correctly early on and be reused everywhere else. I imagine
no one wants to build cluster setup from scratch.
One (the?) prime source of state in our tap tests is the
database. Realistically we can't just tear that one down and reset it between
tests without causing the test times to explode. So we'll have to live with
some persistent state.
On a slight tangent, is this not a problem today?
It is, but that doesn't mean making it even bigger is unproblematic :)
I think part of the problem is that the information about what precisely
failed is often much harder to collect when testing multiple servers
interacting than when doing localized unit tests.
I think we ought to invest a bunch in improving that, I'd hope that a lot of
that work would be largely independent of the language the tests are written
in.
We do a lot more acceptance testing than internal testing, which came
up as a major complaint from me and others during the unconference.
One of the reasons people avoid writing internal tests in Perl is
because it's very painful to find a rhythm with Test::More.
What definition of internal tests are you using here?
I think a lot of our tests are complicated, fragile and slow because we almost
exclusively do end-to-end tests, because with a few exceptions we don't have a
way to exercise code in a more granular way.
When it comes to third-party packages, which I think we're
probably going to want in moderation, we would still need to discuss
supply chain safety. Python is not as mature here as, say, Go.
What external dependencies are you imagining?
The OAuth pytest suite makes extensive use of
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
You *can* solve such issues, but we've debated that in the past, and I doubt
we'll find agreement on the awkwardness it introduces.
- construct, for on-the-wire packet representations and manipulation; and
That seems fairly minimal.
- pyca/cryptography, for easy generation of certificates and manual
crypto testing.
That's a bit more painful, but I guess maybe not unrealistic?
I'd imagine each would need considerable discussion, if there is
interest in doing the same things that I do with them.
One thing worth thinking about is that such dependencies have to work on a
relatively large number of platforms / architectures. A lot of projects
don't...
Greetings,
Andres Freund
On Wed, 12 Jun 2024 at 17:50, Andres Freund <andres@anarazel.de> wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
You *can* solve such issues, but we've debated that in the past, and I doubt
we'll find agreement on the awkwardness it introduces.
psycopg has a few implementations: binary, C, & pure Python. The pure
python one can be linked to a specific libpq.so file at runtime[1]. As
long as we don't break the libpq API (which we shouldn't), we can just
point it to the libpq compiled by meson/make. We wouldn't be able to
use the newest libpq features that way (because psycopg wouldn't know
about them), but that seems totally fine for most usages (i.e. sending
a query over a connection). If we really want to use those from the
python tests we could write our own tiny CFFI layer specifically for
those.
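A sketch of what that could look like in practice (the path is
hypothetical, and the env vars must be set before the interpreter
starts, e.g. PSYCOPG_IMPL=python LD_LIBRARY_PATH=/path/to/built/libpq
pytest):

import psycopg

# Verify the pure-Python implementation is in use, so the freshly built
# libpq (found via LD_LIBRARY_PATH) is the one actually loaded.
assert psycopg.pq.__impl__ == "python"
print(psycopg.pq.version())  # reports the libpq version actually in use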
One thing worth thinking about is that such dependencies have to work on a
relatively large number of platforms / architectures. A lot of projects
don't...
Do they really? A bunch of the Perl tests we just skip on windows or
uncommon platforms. I think we could do the same for these.
On Wed, Jun 12, 2024 at 7:08 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Wed, 12 Jun 2024 at 17:50, Andres Freund <andres@anarazel.de> wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
You *can* solve such issues, but we've debated that in the past, and I doubt
we'll find agreement on the awkwardness it introduces.
psycopg has a few implementations: binary, C, & pure Python. The pure
python one can be linked to a specific libpq.so file at runtime[1]. As
long as we don't break the libpq API (which we shouldn't), we can just
point it to the libpq compiled by meson/make. We wouldn't be able to
use the newest libpq features that way (because psycopg wouldn't know
about them), but that seems totally fine for most usages (i.e. sending
a query over a connection). If we really want to use those from the
python tests we could write our own tiny CFFI layer specifically for
those.
I guess you mean pg8000. Note that pg8000 and psycopg2 have some
differences in interpretation of datatypes (AFAIR arrays, jsonb...).
So, it would be easier to choose one particular driver. However, with
a bit of effort it's possible to make all the code driver-agnostic.
------
Regards,
Alexander Korotkov
Supabase
On 12 Jun 2024, at 18:08, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Wed, 12 Jun 2024 at 17:50, Andres Freund <andres@anarazel.de> wrote:
One thing worth thinking about is that such dependencies have to work on a
relatively large number of platforms / architectures. A lot of projects
don't...
Do they really? A bunch of the Perl tests we just skip on windows or
uncommon platforms. I think we could do the same for these.
For a project intended to improve on the status quo, it seems like too low a bar to just port over the deficiencies of the thing we’re trying to improve on.
./daniel
On Wed, 12 Jun 2024 at 18:08, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Wed, 12 Jun 2024 at 17:50, Andres Freund <andres@anarazel.de> wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
psycopg has a few implementations: binary, C, & pure Python. The pure
python one can be linked to a specific libpq.so file at runtime[1]. As
This is true, but [citation needed] :D I assume the pointer wanted to
be https://www.psycopg.org/psycopg3/docs/api/pq.html#pq-impl
I see the following use cases and how I would use psycopg to implement them:
- by installing 'psycopg[binary]' you would get a binary bundle
shipping with a stable version of the libpq, so you can test the
database server regardless of libpq instabilities in the same
codebase.
- using the pure Python psycopg (enforced by exporting
'PSYCOPG_IMPL=python') you would use the libpq found on the
LD_LIBRARY_PATH, which can be useful to test regressions to the libpq
itself.
- if you want to test new libpq functions you can reach them in Python
by dynamic lookup. See [2] for an example of a function only available
from libpq v17.
[2]: https://github.com/psycopg/psycopg/blob/2bf7783d66ab239a2fa330a842fd461c4bb17c48/psycopg/psycopg/pq/_pq_ctypes.py#L564-L569
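The dynamic-lookup pattern itself is straightforward ctypes; a minimal
sketch (the probed function name below is hypothetical):

from ctypes import cdll
from ctypes.util import find_library

libpq = cdll.LoadLibrary(find_library("pq"))
print(libpq.PQlibVersion())  # long-standing entry point, e.g. 170000

# Probe for an entry point that only newer libpq versions export:
new_fn = getattr(libpq, "PQbrandNewFunction", None)  # hypothetical name
if new_fn is None:
    print("this libpq predates the function under test")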
-- Daniele
On Wed, Jun 12, 2024 at 7:34 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:
On Wed, Jun 12, 2024 at 7:08 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
On Wed, 12 Jun 2024 at 17:50, Andres Freund <andres@anarazel.de> wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
You *can* solve such issues, but we've debated that in the past, and I doubt
we'll find agreement on the awkwardness it introduces.
psycopg has a few implementations: binary, C, & pure Python. The pure
python one can be linked to a specific libpq.so file at runtime[1]. As
long as we don't break the libpq API (which we shouldn't), we can just
point it to the libpq compiled by meson/make. We wouldn't be able to
use the newest libpq features that way (because psycopg wouldn't know
about them), but that seems totally fine for most usages (i.e. sending
a query over a connection). If we really want to use those from the
python tests we could write our own tiny CFFI layer specifically for
those.
I guess you mean pg8000. Note that pg8000 and psycopg2 have some
differences in interpretation of datatypes (AFAIR arrays, jsonb...).
So, it would be easier to choose one particular driver. However, with
a bit of effort it's possible to make all the code driver-agnostic.
Oops, this is probably outdated due to the presence of psycopg3, as pointed out by Daniele Varrazzo [1].
Links:
1. /messages/by-id/CA+mi_8Zj0gpzPKUEcEx2mPOAsm0zPvznhbcnQDA_eeHVnVqg9Q@mail.gmail.com
------
Regards,
Alexander Korotkov
Supabase
On 12 Jun 2024, at 17:50, Andres Freund <andres@anarazel.de> wrote:
On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies between building postgres (for libpq), building psycopg (requiring libpq) and testing postgres (requiring psycopg).
I might be missing something obvious, but if we use a third-party libpq driver
in the testsuite doesn't that imply that a patch adding net new functionality
to libpq also needs to add it to the driver in order to write the tests? I'm
thinking about the SCRAM situation a few years back when drivers weren't up to
date.
--
Daniel Gustafsson
On Wed, 12 Jun 2024 at 19:30, Daniel Gustafsson <daniel@yesql.se> wrote:
I might be missing something obvious, but if we use a third-party libpq driver
in the testsuite doesn't that imply that a patch adding net new functionality
to libpq also needs to add it to the driver in order to write the tests? I'm
thinking about the SCRAM situation a few years back when drivers weren't up to
date.
As Jelte pointed out, new libpq functions can be tested via CFFI. I
posted a practical example in a link upthread (pure Python Psycopg is
entirely implemented on FFI).
-- Daniele
On Wed, Jun 12, 2024 at 1:30 PM Daniel Gustafsson <daniel@yesql.se> wrote:
On 12 Jun 2024, at 17:50, Andres Freund <andres@anarazel.de> wrote:
On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:
The OAuth pytest suite makes extensive use of
- psycopg, to easily drive libpq;
That's probably not going to fly. It introduces painful circular dependencies between building postgres (for libpq), building psycopg (requiring libpq) and testing postgres (requiring psycopg).
I might be missing something obvious, but if we use a third-party libpq driver in the testsuite doesn't that imply that a patch adding net new functionality to libpq also needs to add it to the driver in order to write the tests? I'm thinking about the SCRAM situation a few years back when drivers weren't up to date.
Yeah, I don't think depending on psycopg2 is practical at all. We can
either shell out to psql like we do now, or we can use something like
CFFI.
On the overall topic of this thread, I personally find most of the
rationale articulated in the original message unconvincing. Many of
those improvements could be made on top of the Perl framework we have
today, and some of them have been discussed, but nobody's done the
work. I also don't understand the argument that assert a == b is some
new and wonderful thing; I mean, you can already do is($a,$b,"test
name") which *also* shows you the values when they don't match, and
includes a test name, too! I personally think that most of the
frustration associated with writing TAP tests has to do with (1)
Windows behavior being randomly different than on other platforms in
ways that are severely under-documented, (2)
PostgreSQL::Test::whatever being somewhat clunky and under-designed,
and (3) the general difficulty of producing race-free test cases. A
new test framework isn't going to solve (3), and (1) and (2) could be
fixed anyway.
However, I understand that a lot of people would prefer to code in
Python than in Perl. I am not one of them: I learned Perl in the early
nineties, and I haven't learned Python yet. Nonetheless, Python being
more popular than Perl is a reasonable reason to consider allowing its
use in PostgreSQL. But if that's the reason, let's be up front about
it.
I do think we want a scripting language here, i.e. not C.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Tue, Jun 11, 2024 at 4:48 PM Noah Misch <noah@leadboat.com> wrote:
If we're going to test in a non-Perl language, I'd pick C over Python.
We already test in C, though? If the complaint is that those tests are
driven by Perl, I agree that something like libcheck or GTest or
whatever people are using nowadays would be nicer. But that can be
added at any point; the activation energy for a new C-based test
runner seems pretty small. IMO, there's no reason to pick it _over_
another language, when we already support C tests and agree that
developers need to be fluent in C.
We'd need a library like today's Perl
PostgreSQL::Test to make C-language tests nice, but the same would apply to
any new language.
I think the effort required in rebuilding end-to-end tests in C is
going to be a lot different than in pretty much any other modern
high-level language, so I don't agree that "the same would apply".
For the five problem statements I put forward, I think C moves the
needle for zero of them. It neither solves the problems we have nor
gives us stronger tools to solve them ourselves. And for my personally
motivating use case of OAuth, where I need to manipulate HTTP and JSON
and TLS and so on and so forth, implementing all of that in C would be
an absolute nightmare. Given that choice, I would rather use Perl --
and that's saying something, because I like C a lot more than I like
Perl -- because it's the difference between being given a rusty but
still functional table saw, and being given a box of Legos to build a
"better" table saw, when all I want to do is cut a 2x4 in half and
move on with my work.
I will use the rusty saw if I have to. But I want to get a better saw
-- that somebody else with experience in making saws has constructed,
and other people are pretty happy with -- as opposed to building my
own.
I also want the initial scope to be the new language coexisting with the
existing Perl tests.
Strongly agreed.
Thanks,
--Jacob
On Wed, Jun 12, 2024 at 4:40 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I think C#, Java, Go, Rust, Kotlin, and Swift would be acceptable
choices for me (and possibly some more). They allow some type of
introspection, they have a garbage collector, and their general
tooling is quite good.
But I think a dynamically typed scripting language is much more
fitting for writing tests like this. I love static typing for
production code, but imho it really doesn't have much benefit for
tests.
+1. I write mostly protocol mocks and glue code in my authn testing,
to try to set up the system into some initial state and then break it.
Of the languages mentioned here, I've only used C#, Java, and Go. If I
had to reimplement my tests, I'd probably reach for Go out of all of
those, but the glue would still be more painful than it probably needs
to be.
As scripting languages go, the ones that are still fairly heavily in
use are Javascript, Python, Ruby, and PHP. I think all of those could
probably work, but my personal order of preference would be Python,
Ruby, Javascript, PHP.
- Python is the easiest language I've personally used to glue things
together, bar none.
- I like Ruby as a language but have no experience using it for
testing. (RSpec did come up during the unconference session and
subsequent hallway conversations.)
- Javascript is a completely different mental model from what we're
used to, IMO. I think we're likely to spend a lot of time fighting the
engine unless everyone is very clear on how it works.
- I don't see a use case for PHP here.
TO CLARIFY: This thread is not a proposal to replace Perl with Python.
It's a proposal to allow people to also write tests in Python.
+1. It doesn't need to replace anything. It just needs to help us do
more things than we're currently doing.
--Jacob
On Wed, Jun 12, 2024 at 4:48 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:
Generally, testgres was initially designed as Python analogue of what
we have in src/test/perl/PostgreSQL/Test. In particular its
testgres.PostgresNode is analogue of PostgreSQL::Test::Cluster. It
comes under PostgreSQL License. So, I wonder if we could revise it
and fetch most of it into our source tree.
Okay. If there's wide interest in a port of PostgreSQL::Test::Cluster,
that might be something to take a look at. (Since I'm focused on
testing things that the current Perl suite can't do at all, I would
probably not be the first to volunteer.)
--Jacob
On Wed, Jun 12, 2024 at 8:50 AM Andres Freund <andres@anarazel.de> wrote:
I think I might have formulated my paragraph above badly - I didn't mean that
we should move away from perl tests tomorrow, but that we need a path forward
that allows folks to write tests without perl.
Okay, agreed.
- Tests aren't cheap, but in my experience, the maintenance-cost math
for tests is a lot different than the math for implementations.
At the moment they tend to be *more* expensive often, due to spurious
failures. That's mostly not perl's fault, don't get me wrong, but us not
having better infrastructure for testing complicated behaviour and/or testing
things on a more narrow basis.
Well, okay, but I'm not sure how to respond to this in the frame of
this discussion. Bad tests will continue to exist. I am trying to add
a tool that, in my view, has made it easier for me to test complicated
behavior than what we currently have. I can't prove that it will solve
other issues too.
Well, scopes are pretty front and center when you start building
pytest fixtures, and the complicated longer setups will hopefully
converge correctly early on and be reused everywhere else. I imagine
no one wants to build cluster setup from scratch.
One (the?) prime source of state in our tap tests is the
database. Realistically we can't just tear that one down and reset it between
tests without causing the test times to explode. So we'll have to live with
some persistent state.
Yes? If I've given the impression that I disagree, sorry; I agree.
On a slight tangent, is this not a problem today?
It is, but that doesn't mean making it even bigger is unproblematic :)
Given all that's been said, I don't understand why you think the
problem would get bigger. We would cache expensive state that we need,
including the cluster, and pytest lets us do that, and my test suite
does that. I've never written a suite that spun up a separate cluster
for every single test and then immediately threw it away.
(If you want to _enable_ that behavior, to test in extreme isolation,
then pytest lets you do that too. But it's not something we should do
by default.)
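To make the caching idea concrete, here is a minimal sketch of fixture-scope caching (start_postgres_node is a hypothetical setup helper, not an existing module):

import pytest

@pytest.fixture(scope="session")
def cluster():
    # Expensive setup runs once per test session and is shared by every
    # test that requests this fixture, instead of once per test.
    node = start_postgres_node()  # hypothetical helper
    yield node
    node.stop()

def test_select_one(cluster):
    assert cluster.query("SELECT 1") == [(1,)]

Dropping scope="session" is what would give each test its own throwaway cluster -- the extreme-isolation mode mentioned above.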
We do a lot more acceptance testing than internal testing, which came
up as a major complaint from me and others during the unconference.
One of the reasons people avoid writing internal tests in Perl is
because it's very painful to find a rhythm with Test::More.
What definition of internal tests are you using here?
There's a spectrum from unit-testing unexported functions all the way
to end-to-end acceptance, and personally I believe that anything
finer-grained than end-to-end acceptance is unnecessarily painful. My
OAuth suite sits somewhere in the middle, where it mocks the protocol
layer and can test the client and server as independent pieces. Super
useful for OAuth, which is asymmetrical.
I'd like to additionally see better support for unit tests of backend
internals, but I don't know those seams as well as all of you do and I
should not be driving that. I don't think Python will necessarily help
you with it. But it sure helped me break apart the client and the
server while enjoying the testing process, and other people want to do
that too, so that's what I'm pushing for.
I think a lot of our tests are complicated, fragile and slow because we almost
exclusively do end-to-end tests, because with a few exceptions we don't have a
way to exercise code in a more granular way.
Yep.
That's probably not going to fly. It introduces painful circular dependencies
between building postgres (for libpq), building psycopg (requiring libpq) and
testing postgres (requiring psycopg).
I am trying very hard not to drag that, which I understand is
controversial and is in no way a linchpin of my proposal, into the
discussion of whether or not we should try supporting pytest.
I get it; I understand that the circular dependency is weird; there
are alternatives if it's unacceptable; none of that has anything to do
with Python+pytest.
One thing worth thinking about is that such dependencies have to work on a
relatively large number of platforms / architectures. A lot of projects
don't...
Agreed.
Thanks,
--Jacob
On Wed, Jun 12, 2024 at 10:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:
I might be missing something obvious, but if we use a third-party libpq driver
in the testsuite doesn't that imply that a patch adding net new functionality
to libpq also needs to add it to the driver in order to write the tests?
I use the third-party driver to perform the "basics" at a high level
-- connections, queries during cluster setup, things that don't
involve ABI changes. For new ABI I use ctypes, or as other people have
mentioned CFFI would work.
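As a rough sketch of that ctypes approach (the .so path is illustrative; PQlibVersion() is a real libpq export):

import ctypes

# Load the libpq built by meson/make rather than whatever is installed.
libpq = ctypes.CDLL("/path/to/build/src/interfaces/libpq/libpq.so")

# Declare the return type explicitly before calling.
libpq.PQlibVersion.restype = ctypes.c_int
print(libpq.PQlibVersion())  # e.g. 170000 for a v17 libpq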
--Jacob
On Wed, Jun 12, 2024 at 01:40:30PM +0200, Jelte Fennema-Nio wrote:
On Wed, 12 Jun 2024 at 01:48, Noah Misch <noah@leadboat.com> wrote:
I also want the initial scope to be the new language coexisting with the
existing Perl tests. If a bulk translation ever happens, it should happen
long after the debut of the new framework. That said, I don't much trust a
human-written bulk language translation to go through without some tests
accidentally ceasing to test what they test in Perl today.
I definitely don't think we should rewrite all the tests that we have
in Perl today into some other language. But I do think that whatever
language we choose, that language should make it at least as easy to
write tests, as easy to read them and as easy to see that they are
testing the intended thing, as is currently the case for Perl.
Rewriting a few Perl tests into the new language, even if not merging
the rewrite, is a good way of validating that imho.
Agreed.
PS. For PgBouncer I actually hand-rewrote all the tests that we had in
bash (which is the worst testing language ever) in Python and doing so
actually found more bugs in PgBouncer code that our bash tests
wouldn't catch. So it's not necessarily the case that you lose
coverage by rewriting tests.
Yep.
Hi,
(I don't have an opinion which language should be selected
here.)
In <CAOYmi+mA7-uNqpY-0jNZY=fE-QsbfeM1j5Mc-vu1Xm+=B8NOXA@mail.gmail.com>
"Re: RFC: adding pytest as a supported test framework" on Wed, 12 Jun 2024 12:31:23 -0700,
Jacob Champion <jacob.champion@enterprisedb.com> wrote:
- I like Ruby as a language but have no experience using it for
testing. (RSpec did come up during the unconference session and
subsequent hallway conversations.)
If we want to select Ruby, I can help. (I'm a Ruby committer
and a maintainer of a testing framework bundled in Ruby.)
I'm using Ruby for PGroonga's tests that can't be covered by
pg_regress. For example, streaming replication related
tests. PGroonga has a small utility for it:
https://github.com/pgroonga/pgroonga/blob/main/test/helpers/sandbox.rb
Here is a streaming replication test with it:
https://github.com/pgroonga/pgroonga/blob/main/test/test-streaming-replication.rb
I'm using test-unit as the testing framework that is bundled in
Ruby: https://github.com/test-unit/test-unit/
I don't recommend that we use RSpec as the testing framework even if we select Ruby. RSpec may change its API. (RSpec did it several times in the past.) If a testing framework changes its API, we need to rewrite our tests to adapt to the change.
I'll never change the test-unit API because I don't want to rewrite existing tests.
Thanks,
--
kou
On Wed, 12 Jun 2024 at 18:46, Daniele Varrazzo
<daniele.varrazzo@gmail.com> wrote:
This is true, but [citation needed] :D I assume the pointer wanted to
be https://www.psycopg.org/psycopg3/docs/api/pq.html#pq-impl
Ugh, yes I definitely meant to add a link to that. I meant this one though [1]:
[1]: https://www.psycopg.org/psycopg3/docs/basic/install.html#pure-python-installation
- using the pure Python psycopg (enforced by exporting
'PSYCOPG_IMPL=python') you would use the libpq found on the
LD_LIBRARY_PATH, which can be useful to test regressions to the libpq
itself.
This indeed was the main idea I had in mind.
- if you want to test new libpq functions you can reach them in Python
by dynamic lookup. See [2] for an example of a function only available
from libpq v17.
Yeah, that dynamic lookup would work. But due to the cyclic dependency
on postgres commit vs psycopg PR we couldn't depend on psycopg for
those dynamic lookups. So we'd need to have our own dynamic lookup
code to do this.
I don't see a huge problem with using psycopg for already released
commonly used features (i.e. connecting to postgres and doing
queries), but still use our own custom dynamic lookup for the rare
tests that test newly added features. But I can definitely see people
making the argument that if we need to write & maintain some dynamic
lookup code ourselves anyway, we might as well have all the dynamic
lookup code in our repo to avoid dependencies.
On Tue, Jun 11, 2024 at 8:05 AM Andrew Dunstan <andrew@dunslane.net> wrote:
On 2024-06-10 Mo 21:49, Andres Freund wrote:
Hi,
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
I'm not sure what part of the testing infrastructure you think is
unmaintained. For example, the last release of Test::Simple was all the way
back on April 25.
IPC::Run is quite buggy and basically just maintained by Noah these days.
Yes, that's true. I think the biggest pain point is possibly the recovery tests.
Some time ago I did some work on wrapping libpq using the perl FFI module. It worked pretty well, and would mean we could probably avoid many uses of IPC::Run, and would probably be substantially more efficient (no fork required). It wouldn't avoid all uses of IPC::Run, though.
But my point was mainly that while a new framework might have value, I don't think we need to run out and immediately rewrite several hundred TAP tests. Let's pick the major pain points and address those.
FWIW, I felt a lot of pain trying to write recovery TAP tests with
IPC::Run's pumping functionality. It was especially painful (as
someone who knows even less Perl than the "street fighting Perl"
Thomas Munro has described having) before the additional test
infrastructure was added in BackgroundPsql.pm last year. As an example
of the "worst case", it took me two full work days to go from a repro
(with psql sessions on a primary and replica node) of the vacuum hang
issue being explored in [1]/messages/by-id/CAAKRu_bXH2g_pchG7rN_4fs-_6_kVbbJ97gYRoN0Zdb9P04Wag@mail.gmail.com to a sort-of working TAP test which
demonstrated it - and that was with help from several other
committers. Granted, this is a complex case.
A small part of the issue is that, as Tristan has said elsewhere,
there aren't good developer tool integrations that I know about for
Perl. I use neovim's LSP support for C and Python (in other projects),
and there is a whole ecosystem of tools I can use for both C and
Python. I know not everyone likes or needs these, but I find that they
help me write and debug code faster.
I had offered to take a stab at writing some of the BackgroundPsql
test infrastructure in Python. I haven't started exploring that yet or
looking at what Jacob has done so far, but I am optimistic that this
is an area where it is worth seeing what is available to us outside of
IPC::Run.
- Melanie
[1]: /messages/by-id/CAAKRu_bXH2g_pchG7rN_4fs-_6_kVbbJ97gYRoN0Zdb9P04Wag@mail.gmail.com
On Wed, 12 Jun 2024 at 21:07, Robert Haas <robertmhaas@gmail.com> wrote:
Yeah, I don't think depending on psycopg2 is practical at all. We can
either shell out to psql like we do now, or we can use something like
CFFI.
Quick clarification: I meant psycopg3, not psycopg2. And I'd very much
like to avoid using psql for sending queries, luckily CFFI in python
is very good.
Many of
those improvements could be made on top of the Perl framework we have
today, and some of them have been discussed, but nobody's done the
work.
I agree it's not a technical issue. It is a people issue. There are
very few people skilled in Perl active in the community. And most of
those are very senior hackers that have much more important things to do than making our Perl testing framework significantly better. And the less senior people that might see improving tooling as a way to help out in the community try to stay away from Perl with a 10-foot pole. So the result is, nothing gets improved. Especially since
very few people outside our community improve this tooling either.
I also don't understand the argument that assert a == b is some
new and wonderful thing; I mean, you can already do is($a,$b,"test
name") which *also* shows you the values when they don't match, and
includes a test name, too!
Sure you can, if you know the function exists. And clearly not
everyone knows that it exists, as the quick grep below demonstrates:
❯ grep 'ok(.* == .*' **.pl | wc -l
41
But apart from the obvious syntax doing what you want, the output is
also much better when looking at a slightly more complex case. With
the following code:
def some_returning_func():
return 1234
def some_func(val):
if val > 100:
return 100
return val
def test_mytest():
assert some_func(some_returning_func()) == 50
Pytest will show the following output:
def test_mytest():
assert some_func(some_returning_func()) == 50
E assert 100 == 50
E + where 100 = some_func(1234)
E + where 1234 = some_returning_func()
I have no clue how you could get output that's even close to that
clear with Perl.
Another problem I run into is that, as you probably know, sometimes
you need to look at the postgres logs to find out what actually went
wrong. Currently the only way to find them (for me) is to follow these steps: hmm, let me figure out what that directory was called again... ah okay it is build/testrun/pg_upgrade/001_basic/... okay let's start opening log files that all have very similar names until I find the right one.
When a test in pytest fails it automatically outputs all stdout/stderr that was produced, and hides it on success. So for the PgBouncer test suite, I simply send all the relevant log files to stdout, prefixed by
some capitalized identifying line with a few newlines around it.
Something like "PG_LOG: /path/to/actual/logfile". Then when a test
fails in my terminal window I can look at the files related to the
failed test instantly. This allows me to debug failures much faster.
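A minimal sketch of that pattern (the fixture and log location are hypothetical; the capture behavior is standard pytest):

import pytest

@pytest.fixture
def pg_log(tmp_path):
    logfile = tmp_path / "postgres.log"  # wherever the node writes its log
    yield logfile
    # Runs during teardown; pytest captures this output and only shows it
    # (alongside the assertion report) when the test failed.
    print(f"\nPG_LOG: {logfile}\n")
    if logfile.exists():
        print(logfile.read_text())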
A related thing that also doesn't help at all is that (afaik) seeing
any of the perl tap test output in your terminal requires running
`meson test` with the -v option, and then scrolling up past all the
super verbose output of successfully passing tests to find out what
exactly failed in the single test that failed. And if you don't want
to do that you have to navigate to the magic directory path (build/testrun/pg_upgrade/001_basic/) of the specific tests to look at
the stdout file there... Which then turns out not to even be there if
you actually had a compilation failure in your perl script (which
happens very often to anyone that doesn't use perl often). So now you
have to scroll up anyway.
Pytest instead is very good at only showing output for the tests that
failed, and hiding pretty much all output for the tests that passed.
On 13 Jun 2024, at 00:34, Melanie Plageman <melanieplageman@gmail.com> wrote:
FWIW, I felt a lot of pain trying to write recovery TAP tests with
IPC::Run's pumping functionality. It was especially painful (as
someone who knows even less Perl than the "street fighting Perl"
Thomas Munro has described having) before the additional test
infrastructure was added in BackgroundPsql.pm last year.
A key aspect of this, which isn't specific to Perl or our use of it, is that this was done in backbranches which don't have the (recently) much improved BackgroundPsql.pm. The quality of our tools and the ease of use they provide is directly related to the investment we make into continuously improving our test harness. Regardless of which toolset we adopt, if we don't make this
investment (taking learnings from the past years and toolsets into account)
we're bound to repeat this thread in a few years advocating for toolset X+1.
--
Daniel Gustafsson
On Wed, Jun 12, 2024 at 6:43 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I agree it's not a technical issue. It is a people issue. There are
very few people skilled in Perl active in the community. And most of
those are very senior hackers that have much more important things to do than making our Perl testing framework significantly better. And the less senior people that might see improving tooling as a way to help out in the community try to stay away from Perl with a 10-foot pole. So the result is, nothing gets improved. Especially since
very few people outside our community improve this tooling either.
I agree with you, but I'm skeptical that solving it will be as easy as
switching to Python. For whatever reason, it seems like every piece of
infrastructure that the PostgreSQL community has suffers from severe
neglect. Literally everything I know of either has one or maybe two
very senior hackers maintaining it, or no maintainer at all. Andrew
maintains the buildfarm and it evolves quite slowly. Andres did all
the work on meson, with some help from Peter. Thomas maintains cfbot
as a skunkworks. The Perl-based TAP test framework gets barely any
love at all. The CommitFest application is pretty much totally
stagnant, and in fact is a great example of what I'm talking about
here: I wrote an original version in Perl and somebody -- I think
Magnus -- rewrote it in a more maintainable framework -- and then the
development pace went to basically zero. All of this stuff is critical
project infrastructure and yet it feels like nobody wants to work on
it.
Now, this case may prove to be an exception to that rule and that will
be great. But what I think is a lot more likely is that we'll get a
lot of pressure to commit something as soon as parity with the Perl
TAP test system has been achieved, or maybe even before that, and then
the rate of further improvements will slow to a trickle. That's not to
say that sticking with Perl is better. A quick Google search finds a
web page that says Python is two orders of magnitude more popular than
Perl, and that's not something we should just ignore. But I still
think it's fair to question whether the preference of many developers
for Python over Perl will translate into sustained investment in
improving the infrastructure. Again, I will be thrilled if it does,
but that just doesn't seem to be the way that things go around here,
and I bet the reasons go well beyond choice of programming language.
--
Robert Haas
EDB: http://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
On Wed, Jun 12, 2024 at 6:43 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I agree it's not a technical issue. It is a people issue. There are
very few people skilled in Perl active in the community. And most of
those are very senior hackers that have much more important things to do than making our Perl testing framework significantly better. And the less senior people that might see improving tooling as a way to help out in the community try to stay away from Perl with a 10-foot pole. So the result is, nothing gets improved. Especially since
very few people outside our community improve this tooling either.
I agree with you, but I'm skeptical that solving it will be as easy as
switching to Python. For whatever reason, it seems like every piece of
infrastructure that the PostgreSQL community has suffers from severe
neglect.
Yeah. In this case it's perhaps more useful to look at our external
dependencies, the large majority of which are suffering from age
and neglect:
* autoconf & gmake (although meson may get us out from under these)
* bison
* flex
* perl
* tcl
* regex library (basically from tcl)
* libxml2
* kerberos
* ldap
* pam
* uuid library
I think the basic problem is inherent in being a successful long-lived
project. Or maybe we're just spectacularly bad at picking which
things to depend on. Whichever it is, we'd better have a 10- or 20-
year perspective when thinking about adopting new major dependencies.
In the case at hand, I share Robert's doubts about Python. Sure it's
more popular than Perl, but I don't think it's actually better, and
in some ways it's worse. (The moving-target package collection was
mentioned as a problem, for instance.) Is it going to age better
than Perl? Doubt it.
I wonder if we should be checking out some of the other newer
languages that were mentioned upthread. It feels like going to
Python here will lead to having two testing infrastructures with
mas-o-menos the same capabilities, leaving us with a situation
where people have to know both languages in order to make sense of
our test suite. I find it hard to picture that as an improvement
over the status quo.
regards, tom lane
On Thu, 13 Jun 2024 at 15:38, Robert Haas <robertmhaas@gmail.com> wrote:
For whatever reason, it seems like every piece of
infrastructure that the PostgreSQL community has suffers from severe
neglect. Literally everything I know of either has one or maybe two
very senior hackers maintaining it, or no maintainer at all. Andrew
maintains the buildfarm and it evolves quite slowly. Andres did all
the work on meson, with some help from Peter. Thomas maintains cfbot
as a skunkworks. The Perl-based TAP test framework gets barely any
love at all. The CommitFest application is pretty much totally
stagnant, and in fact is a great example of what I'm talking about
here: I wrote an original version in Perl and somebody -- I think
Magnus -- rewrote it in a more maintainable framework -- and then the
development pace went to basically zero. All of this stuff is critical
project infrastructure and yet it feels like nobody wants to work on
it.
Overall, I agree with the sentiment of us not maintaining our tooling
well (although I think meson maintenance has been pretty decent so
far). I think there's a bunch of reasons for this (not all apply to
each of the tools):
1. pretty much solely maintained by senior community members who don't
have time to maintain it
2. no clear way to contribute. e.g. where should I send a patch/PR for
the commitfest app, or the cfbot?
3. (related to 1) unresponsive when contributions are actually
sent in (I have two open PRs on the cfbot app from 3 years ago without
any response)
I think 1 & 3 could be addressed by more easily giving commit/merge
access to these tools than to the main PG repo. And I think 2 could be
addressed by writing on the relevant wiki page where to go, and
probably putting a link to the wiki page on the actual website of the
tool.
But Perl is at the next level of unmaintained infrastructure. It is
actually clear how you can contribute to it, but still no new
community members actually want to contribute to it. Also, it's not
only unmaintained by us but it's also pretty much unmaintained by the
upstream community.
But I still
think it's fair to question whether the preference of many developers
for Python over Perl will translate into sustained investment in
improving the infrastructure. Again, I will be thrilled if it does,
but that just doesn't seem to be the way that things go around here,
and I bet the reasons go well beyond choice of programming language.
As you said, no one in our community wants to maintain our testsuite
full time. But our test suite consists partially of upstream
dependencies and partially of our own code. Right now pretty much
no-one improves the upstream code, and pretty much no-one improves our own code. Using a more modern language gives us much more frequent
upstream improvements for free, and it will allow new community
members to contribute to our own test suite.
And I understand you are sceptical that people will contribute to our
own test suite, just because it's Python. But as a counterpoint:
people are currently already doing exactly that, just outside of the
core postgres repo[1][2][3]. I don't see why those people would
suddenly stop doing that if we include such a suite in the official
repo. Apparently many people hate writing tests in Perl so much that
they'd rather build Python test frameworks to test their extensions,
than to use/improve the Perl testing framework included in Postgres.
[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test
[2]: https://github.com/jchampio/pg-pytest-suite
[3]: https://github.com/postgrespro/testgres
PS. I don't think it makes sense to host our tooling like the
commitfest app on our own git server instead of github/gitlab. That
only makes it harder for community members to contribute and also much
harder to set up CI. I understand the reasons why we use mailing lists
for the development of core postgres, but I don't think those apply
nearly as much to our tooling repos. And honestly also not to stuff
like the website.
On Thu, 13 Jun 2024 at 17:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wonder if we should be checking out some of the other newer
languages that were mentioned upthread.
If this is actually something that we want to seriously evaluate, I
think that's a significant effort. And I think the people that want a
language would need to step in to make that effort. So far Jacob[2], Alexander[3] and me[1] seem to be doing that for Python, and Sutou has done that for Ruby[4].
[1]: https://github.com/pgbouncer/pgbouncer/tree/master/test
[2]: https://github.com/jchampio/pg-pytest-suite
[3]: https://github.com/postgrespro/testgres
[4]: https://github.com/pgroonga/pgroonga/blob/main/test/test-streaming-replication.rb
It feels like going to
Python here will lead to having two testing infrastructures with
mas-o-menos the same capabilities, leaving us with a situation
where people have to know both languages in order to make sense of
our test suite. I find it hard to picture that as an improvement
over the status quo.
You don't have to be fluent in writing Python to be able to read and
understand tests written in it. As someone familiar with Python I can
definitely read our test suite, and I expect everyone smart enough to
be fluent in Perl to be able to read and understand Python with fairly
little effort too.
I think having significantly more tests being written, and those tests being written faster and more correctly, is definitely worth the slight mental effort of learning to read two very similar-looking scripting languages (they both pretty much look like pseudocode).
Jelte Fennema-Nio <postgres@jeltef.nl> writes:
You don't have to be fluent in writing Python to be able to read and
understand tests written in it.
[ shrug... ] I think the same can be said of Perl, with about as
much basis. It matters a lot if you have previous experience with
the language.
regards, tom lane
On Thu, Jun 13, 2024 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wonder if we should be checking out some of the other newer
languages that were mentioned upthread. It feels like going to
Python here will lead to having two testing infrastructures with
mas-o-menos the same capabilities, leaving us with a situation
where people have to know both languages in order to make sense of
our test suite. I find it hard to picture that as an improvement
over the status quo.
As I see it, one big problem is that if you pick a language that's too
new, it's more likely to fade away. Python is very well-established,
e.g. see
https://www.tiobe.com/tiobe-index/
That gives Python a rating of 15.39%, vs. Perl at 0.69%. There are
other things that you could pick, for sure, like Javascript, but if
you want a scripting language that's popular now, Python is hard to
beat. And that means it's more likely to still have some life in it 10
or 20 years from now than many other things.
Not all sites agree on which programming languages are actually the
most popular and I'm not strongly against considering other
possibilities, but Python seems to be pretty high on most lists, often
#1, and that does matter.
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, Jun 13, 2024 at 7:27 AM Daniel Gustafsson <daniel@yesql.se> wrote:
On 13 Jun 2024, at 00:34, Melanie Plageman <melanieplageman@gmail.com> wrote:
FWIW, I felt a lot of pain trying to write recovery TAP tests with
IPC::Run's pumping functionality. It was especially painful (as
someone who knows even less Perl than the "street fighting Perl"
Thomas Munro has described having) before the additional test
infrastructure was added in BackgroundPsql.pm last year.
A key aspect of this, which isn't specific to Perl or our use of it, is that this was done in backbranches which don't have the (recently) much improved BackgroundPsql.pm. The quality of our tools and the ease of use they provide is directly related to the investment we make into continuously improving our test harness. Regardless of which toolset we adopt, if we don't make this
investment (taking learnings from the past years and toolsets into account)
we're bound to repeat this thread in a few years advocating for toolset X+1.
True. And thank you for committing BackgroundPsql.pm (and Andres for
starting that work). My specific case is likely one of a poor work
person blaming her tools :)
- Melanie
On Thu, Jun 13, 2024 at 1:08 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I think 1 & 3 could be addressed by more easily giving commit/merge
access to these tools than to the main PG repo. And I think 2 could be
addressed by writing on the relevant wiki page where to go, and
probably putting a link to the wiki page on the actual website of the
tool.
+1.
But Perl is at the next level of unmaintained infrastructure. It is
actually clear how you can contribute to it, but still no new
community members actually want to contribute to it. Also, it's not
only unmaintained by us but it's also pretty much unmaintained by the
upstream community.
I feel like I already agreed to this in a previous email and you're
continuing to argue with me as if I were disagreeing.
As you said, no one in our community wants to maintain our testsuite
full time. But our test suite consists partially of upstream
dependencies and partially of our own code. Right now pretty much
no-one improves the upstream code, and pretty much no-one improves our own code. Using a more modern language gives us much more frequent
upstream improvements for free, and it will allow new community
members to contribute to our own test suite.
I also agree with this. I'm just not super optimistic about how much
of that will actually happen. And I'd like to hear you acknowledge
that concern and think about whether it can be addressed in some way,
instead of just repeating that we should do it anyway. Because I agree
we probably should do it anyway, but that doesn't mean I wouldn't like
to see the downsides mitigated as much as we can. In particular, if
the proposal is exactly "let's add the smallest possible patch that
enables people to write tests in Python and then add a few new tests
in Python while leaving almost everything else in Perl, with no
migration plan and no clear vision of how the Python support ever gets
any better than the minimum stub that is proposed for initial commit,"
then I don't know that I can vote for that plan. Honestly, that sounds
like very little work for the person proposing that minimal patch and
a whole lot of work for the rest of the community later on, and the
evidence is not in favor of volunteers showing up to take care of that
work. The plan should be more front-loaded than that: enough initial
development should get done by the people making the proposal that if
the work stops after, we don't have another big mess on our hands.
Or so I think, anyway.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 13 Jun 2024, at 20:09, Melanie Plageman <melanieplageman@gmail.com> wrote:
On Thu, Jun 13, 2024 at 7:27 AM Daniel Gustafsson <daniel@yesql.se> wrote:
On 13 Jun 2024, at 00:34, Melanie Plageman <melanieplageman@gmail.com> wrote:
FWIW, I felt a lot of pain trying to write recovery TAP tests with
IPC::Run's pumping functionality. It was especially painful (as
someone who knows even less Perl than the "street fighting Perl"
Thomas Munro has described having) before the additional test
infrastructure was added in BackgroundPsql.pm last year.
A key aspect of this, which isn't specific to Perl or our use of it, is that this was done in backbranches which don't have the (recently) much improved BackgroundPsql.pm. The quality of our tools and the ease of use they provide is directly related to the investment we make into continuously improving our test harness. Regardless of which toolset we adopt, if we don't make this investment (taking learnings from the past years and toolsets into account) we're bound to repeat this thread in a few years advocating for toolset X+1.
True. And thank you for committing BackgroundPsql.pm (and Andres for
starting that work). My specific case is likely one of a poor work
person blaming her tools :)
I don't think it is since the tools we had then were really hard to use. I
wrote very similar tests to yours for the online checksums patch and they were
quite complicated to get right. The point is that the complexity was greatly
reduced by the community, and that kind of work will be equally important
regardless of toolset.
--
Daniel Gustafsson
On Thu, 13 Jun 2024 at 20:11, Robert Haas <robertmhaas@gmail.com> wrote:
But Perl is at the next level of unmaintained infrastructure. It is
actually clear how you can contribute to it, but still no new
community members actually want to contribute to it. Also, it's not
only unmaintained by us but it's also pretty much unmaintained by the
upstream community.
I feel like I already agreed to this in a previous email and you're
continuing to argue with me as if I were disagreeing.
Sorry about that.
I also agree with this. I'm just not super optimistic about how much
of that will actually happen. And I'd like to hear you acknowledge
that concern and think about whether it can be addressed in some way,
instead of just repeating that we should do it anyway. Because I agree
we probably should do it anyway, but that doesn't mean I wouldn't like
to see the downsides mitigated as much as we can.
I'm significantly more optimistic than you, but I also definitely
understand and agree with the concern. I also agree that mitigating
that concern beforehand would be a good thing.
In particular, if
the proposal is exactly "let's add the smallest possible patch that
enables people to write tests in Python and then add a few new tests
in Python while leaving almost everything else in Perl, with no
migration plan and no clear vision of how the Python support ever gets
any better than the minimum stub that is proposed for initial commit,"
then I don't know that I can vote for that plan. Honestly, that sounds
like very little work for the person proposing that minimal patch and
a whole lot of work for the rest of the community later on, and the
evidence is not in favor of volunteers showing up to take care of that
work. The plan should be more front-loaded than that: enough initial
development should get done by the people making the proposal that if
the work stops after, we don't have another big mess on our hands.
Or so I think, anyway.
I understand and agree with your final stated goal of not ending up in
another big mess. It's also clear to me that you don't think the
current proposal achieves that goal. So I assume you have some
additional ideas for the proposal to help achieve that goal and/or
some specific worries that you'd like to get addressed better in the
proposal. But currently it's not really clear to me what either of
those are. Could you clarify?
On Thu, Jun 13, 2024 at 9:38 AM Robert Haas <robertmhaas@gmail.com> wrote:
I agree with you, but I'm skeptical that solving it will be as easy as
switching to Python. For whatever reason, it seems like every piece of
infrastructure that the PostgreSQL community has suffers from severe
neglect. Literally everything I know of either has one or maybe two
very senior hackers maintaining it, or no maintainer at all.
...
All of this stuff is critical project infrastructure and yet it feels like
nobody wants to work on
it.
I feel at least some of this is a visibility / marketing problem. I've not
seen any dire requests for help come across on the lists, nor things on the various todos/roadmaps/blog posts people make from time to time. If I
had, I would have jumped in. And for the record, I'm very proficient with
Perl.
Cheers,
Greg
On Thu, Jun 13, 2024 at 2:52 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
I understand and agree with your final stated goal of not ending up in
another big mess. It's also clear to me that you don't think the
current proposal achieves that goal. So I assume you have some
additional ideas for the proposal to help achieve that goal and/or
some specific worries that you'd like to get addressed better in the
proposal. But currently it's not really clear to me what either of
those are. Could you clarify?
Hmm, I don't know that I have what you're hoping I have, or at least
not any more than what I've said already.
I interpreted Jacob's original email as articulating a goal ("For the
v18 cycle, I would like to try to get pytest [1] in as a supported
test driver, in addition to the current offerings") rather than a
plan. There's no patch set yet and, as I understand it, no detailed
plan for a patch set: that email seemed to focus on the question of
desirability, rather than on outlining a plan of work, which I assume
is still to come. Some things I'd like to see when a patch set does
show up are:
- good documentation for people who have no previous experience with
Python and/or pytest e.g. here's how to set up your environment on
Linux, Windows, macOS, *BSD so you can run the tests, here's how to
run the tests, here's how it's different from the Perl framework we
have now
- no external dependencies on PostgreSQL connectors: psql or a libpq foreign function interface; the latter would be a cool increment of progress over the status quo.
- at least as much in-tree support for writing tests as we have today
with PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention
of the stuff we have now, and documentation of those facilities that
is as good or, ideally, better than what we have today.
- high overall code quality and level of maturity, not just something
someone threw together for parity with the Perl system.
- enough tests written for or converted to the new system to give
reviewers confidence that it's truly usable and fit for purpose.
The important thing to me here (as it so often is) is to think like a
maintainer. Imagine that immediately after the patches for this
feature are committed, the developers who did the work all disappear
from the community and are never heard from again. How much pain does
that end us causing? The answer doesn't need to be zero; that is
unrealistic. But it also shouldn't be "well, if that happens we're
going to have to rip the feature out" or "well, a bunch of committers
who didn't want to write tests in Python in the first place are now
going to have to do a lot of work in Python to stabilize the work
already committed."
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, Jun 13, 2024 at 3:17 PM Greg Sabino Mullane <htamfids@gmail.com> wrote:
I feel at least some of this is a visibility / marketing problem. I've not seen any dire requests for help come across on the lists, nor things on the various todos/road maps/ blog posts people make from time to time. If I had, I would have jumped in. And for the record, I'm very proficient with Perl.
I agree with all of that!
--
Robert Haas
EDB: http://www.enterprisedb.com
On Thu, Jun 13, 2024 at 11:11 AM Robert Haas <robertmhaas@gmail.com> wrote:
I feel like I already agreed to this in a previous email and you're
continuing to argue with me as if I were disagreeing.
I also think that maybe arguments are starting to sail past each
other, and the temperature is starting to climb. (And Jelte may be
arguing to all readers of the thread, rather than just a single
individual. It's hard to tell with the list format.) And now I see
that there's another email that came in while I was writing this, but
I think I'm going to have to send this as-is because I can't write
emails that fast.
I also agree with this. I'm just not super optimistic about how much
of that will actually happen. And I'd like to hear you acknowledge
that concern and think about whether it can be addressed in some way,
instead of just repeating that we should do it anyway. Because I agree
we probably should do it anyway, but that doesn't mean I wouldn't like
to see the downsides mitigated as much as we can.
Okay, +1.
In particular, if
the proposal is exactly "let's add the smallest possible patch that
enables people to write tests in Python and then add a few new tests
in Python while leaving almost everything else in Perl, with no
migration plan and no clear vision of how the Python support ever gets
any better than the minimum stub that is proposed for initial commit,"
then I don't know that I can vote for that plan.
(that's not the proposal and I know/think you know that but having my
original email twisted into that is making me feel a bit crispy)
I do not want to migrate, and I have stated so multiple times, which
is why I have not proposed a migration plan. Other committers have
already expressed resistance to the idea that we would rewrite the
Perl stuff. I think they're right. I think we should not. I think we
should accept the cost of both Perl and something else for the
near-to-medium future, as long as the "something else" gives us value
offsetting the additional cost.
Honestly, that sounds
like very little work for the person proposing that minimal patch and
a whole lot of work for the rest of the community later on, and the
evidence is not in favor of volunteers showing up to take care of that
work.
Okay, cool. Here: as the person who is 100% signing himself up to do
that, time for me to jump in.
I have an entire 6000-something-line suite of protocol tests that has
been linked like four times above. It does something fundamentally
different from the end-to-end Perl suite; it is not a port. It is far
from perfect and I do not want to pressure people to adopt it as-is,
which is why I have not. In this thread, I am offering it solely as
evidence that I have follow-up intent.
But I might get hit by a bus. Or, as far as anyone except me knows, I
might lose interest after things get hard, which would be sad. Which
is why my very first proposal was to add an entry point that can be
reverted. The suite is not going to infect the codebase any more than
the Perl does. A revert will involve pulling the Meson test entry
code, and deleting all pytest subdirectories (of which there is only
one, IIRC, in my OAuth suite).
The plan should be more front-loaded than that: enough initial
development should get done by the people making the proposal that if
the work stops after, we don't have another big mess on our hands.
For me personally, the problem is the opposite. I have done _so much_
initial development by myself that there's no way it could ever be
reviewed and accepted. But I had to do that to get meaningful
development done in my style of work, which is focused on security and
testability and verifiable implementation.
I am trying to carve off pieces of that and say "hey, does this look
nice to anyone else?" That will take time, probably over multiple
different threads. In the meantime, I don't want to be a serialization
point for other people who are excited about trying new testing
methods, because very few people are currently doing the exact kind of
work I am doing. They may want to do other things, as evidenced by the
thread contents. At least one committer would have to sign up to be a
serialization point, unfortunately, but I think that's going to be
true regardless of plan, if we want multiple non-committer members of
the community to be involved instead of just one torch-bearer.
Because of how many moving parts and competing interests and personal
disagreements there are, I am firmly in the camp of "try something
that many people think *might* work better, that can be undone if it
sucks, and iterate on it." I want to build community momentum, because
I think that's a pretty effective way to change the cultural norms
that you're saying you're frustrated with. That doesn't mean I want to
do this without a plan; it just means that the plan can involve saying
"this is not working and we can undo it" which makes the uncertainty
easier to take.
--Jacob
On Thu, Jun 13, 2024 at 3:28 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
(that's not the proposal and I know/think you know that but having my
original email twisted into that is making me feel a bit crispy)
I definitely did not mean to imply that. I took your original email as
a goal, rather than a proposal or plan. My statement was strictly
intended as a hypothetical because I didn't think any plan had been
proposed - I only meant to say that *if* the plan were to do X, that
would be a hard sell for me.
I do not want to migrate, and I have stated so multiple times, which
is why I have not proposed a migration plan. Other committers have
already expressed resistance to the idea that we would rewrite the
Perl stuff. I think they're right. I think we should not. I think we
should accept the cost of both Perl and something else for the
near-to-medium future, as long as the "something else" gives us value
offsetting the additional cost.
I agree. It's not terribly pretty, IMHO, but it's hard to see doing
things any other way.
For me personally, the problem is the opposite. I have done _so much_
initial development by myself that there's no way it could ever be
reviewed and accepted. But I had to do that to get meaningful
development done in my style of work, which is focused on security and
testability and verifiable implementation.
I admire this attitude. I think a lot of people who go off and do a
ton of initial work outside core show up and are like "ok, now take
all of my code." As you say, that's not realistic. One caveat here,
perhaps, is that the focus of the work you've done up until now and
the things that I and other community members may want as a condition
of merging stuff may be somewhat distinct. You will have naturally
been focused on your goals rather than other people's goals, or so I
assume.
I am trying to carve off pieces of that and say "hey, does this look
nice to anyone else?" That will take time, probably over multiple
different threads.
This makes sense, but I would be a bit wary of splitting it up over
too many different threads. It may well make sense to split it up, but
it will probably be easier to review if the core work to enable this
is one patch set on one thread where someone can read just that one
thread and understand the situation, rather than many threads where
you have to read them all.
Because of how many moving parts and competing interests and personal
disagreements there are, I am firmly in the camp of "try something
that many people think *might* work better, that can be undone if it
sucks, and iterate on it." I want to build community momentum, because
I think that's a pretty effective way to change the cultural norms
that you're saying you're frustrated with. That doesn't mean I want to
do this without a plan; it just means that the plan can involve saying
"this is not working and we can undo it" which makes the uncertainty
easier to take.
As a community, we're really bad at this. Once something gets
committed, getting a consensus to revert it is really hard, especially
if a major release has happened meanwhile, but most of the time even
if it hasn't. It might be a little easier in this case, since after
all it's not a directly user-visible feature. But historically what
happens if somebody says "hey, there are six unfixed problems with
this feature!" is that everybody says "well, you're free to fix the
problems if you want, but you're not allowed to revert the feature."
And that is *exactly* how we end up with stuff like the current TAP
test framework: ripping that out would mean removing all the TAP tests
that depend on it, and that wouldn't have achieved consensus two
months after the feature went in, let alone today.
Now, it has been suggested to me by at least one other person involved
with the project that we need to be more open to the kind of thing
that you propose here: add experimental things and take them out if it
doesn't work out. I can definitely understand that this might be a
culturally better approach than what we currently do. So maybe that's
the way forward, but it is hard (at least for me) to get past the fear
of being the one left holding the bag, and I suspect that other
committers have similar fears. What exactly we should do about that,
I'm not sure.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2024-06-12 We 11:28, Andres Freund wrote:
Hi,
On 2024-06-11 08:04:57 -0400, Andrew Dunstan wrote:
Some time ago I did some work on wrapping libpq using the perl FFI module.
It worked pretty well, and would mean we could probably avoid many uses of
IPC::Run, and would probably be substantially more efficient (no fork
required). It wouldn't avoid all uses of IPC::Run, though.
FWIW, I'd *love* to see work on this continue. The reduction in test runtime
on Windows is substantial and would make the hack->CI->fail->hack loop a
good bit shorter. And save money.
OK, I will put it high on my list. I just did some checking and it seems
to be feasible on Windows. StrawberryPerl at least has FFI::Platypus out
of the box, so we would not need to turn any great handsprings to make
progress on this on a fairly wide variety of platforms.
What seems a good place to start would be a simple
PostgreSQL::Test::Session object that would allow us to get rid of a
whole heap of start/pump_until/kill cycles and deal with the backend in
a much more straightforward and comprehensible way, not to mention the
possibility of removing lots of $node->{safe_}psql calls.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, Jun 13, 2024 at 12:20 PM Robert Haas <robertmhaas@gmail.com> wrote:
I interpreted Jacob's original email as articulating a goal ("For the
v18 cycle, I would like to try to get pytest [1] in as a supported
test driver, in addition to the current offerings") rather than a
plan.
That's the first part of it.
There's no patch set yet and, as I understand it, no detailed
plan for a patch set: that email seemed to focus on the question of
desirability, rather than on outlining a plan of work, which I assume
is still to come.
There was a four-step plan sketch at the end of that email, titled "A
Plan". That was not intended to be "the final detailed plan", because
I was soliciting feedback on the exact pieces people wanted to try to
implement first, and I am not the God Emperor of Pytest. But it was
definitely A Plan.
Some things I'd like to see when a patch set does
show up are:
- good documentation for people who have no previous experience with
Python and/or pytest e.g. here's how to set up your environment on
Linux, Windows, macOS, *BSD so you can run the tests, here's how to
run the tests, here's how it's different from the Perl framework we
have now
+1
- no external dependencies on PostgreSQL connectors; use psql or a libpq
foreign function interface instead. The latter would be a cool increment of
progress over the status quo.
If this is a -1 for psycopg, then I will cast my vote for ctypes/CFFI
and against psql.
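For illustration only, a minimal ctypes sketch of the idea (no error
handling, assuming libpq can be found on the system library path, and
not a proposal for the real fixture API):

    import ctypes
    import ctypes.util

    # Load the system libpq and declare the few entry points we need.
    libpq = ctypes.CDLL(ctypes.util.find_library("pq"))
    libpq.PQconnectdb.restype = ctypes.c_void_p
    libpq.PQconnectdb.argtypes = [ctypes.c_char_p]
    libpq.PQstatus.argtypes = [ctypes.c_void_p]
    libpq.PQexec.restype = ctypes.c_void_p
    libpq.PQexec.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
    libpq.PQgetvalue.restype = ctypes.c_char_p
    libpq.PQgetvalue.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int]
    libpq.PQclear.argtypes = [ctypes.c_void_p]
    libpq.PQfinish.argtypes = [ctypes.c_void_p]

    conn = libpq.PQconnectdb(b"dbname=postgres")
    assert libpq.PQstatus(conn) == 0  # 0 == CONNECTION_OK
    res = libpq.PQexec(conn, b"SELECT 1")
    assert libpq.PQgetvalue(res, 0, 0) == b"1"
    libpq.PQclear(res)
    libpq.PQfinish(conn)

No driver dependency; just the standard library on top of our own libpq.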
- at least as much in-tree support for writing tests as we have today
with PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention
of the stuff we have now, and documentation of those facilities that
is as good or, ideally, better than what we have today.
I think this is way too much expectation for a v1 patch. If you were
committing this by yourself, would you agree to develop the entirety
of PostgreSQL::Test in a single commit, without the benefit of the
buildfarm checking you as you went, and other people trying to write
tests with it?
- high overall code quality and level of maturity, not just something
someone threw together for parity with the Perl system.
+1
- enough tests written for or converted to the new system to give
reviewers confidence that it's truly usable and fit for purpose.
This is that "know everything up front" tax that I think is not
reasonable for a test framework. If the thing you're trying to avoid
is the foot-in-the-door phenomenon, I would agree with you for a
Postgres feature. But these are tests; we don't ship them, we have
different rules for backporting them, they are developed in a very
different way.
The important thing to me here (as it so often is) is to think like a
maintainer. Imagine that immediately after the patches for this
feature are committed, the developers who did the work all disappear
from the community and are never heard from again. How much pain does
that end us causing? The answer doesn't need to be zero; that is
unrealistic. But it also shouldn't be "well, if that happens we're
going to have to rip the feature out"
Can you elaborate on why that's not an okay outcome?
or "well, a bunch of committers
who didn't want to write tests in Python in the first place are now
going to have to do a lot of work in Python to stabilize the work
already committed."
Is it that? If the problem is that, we should address that. Because if
that is truly the fear, I cannot assuage that fear without showing you
something, and I cannot show you something you do not want to see, if
you don't want to write tests in Python in the first place.
--Jacob
On 2024-06-12 We 18:34, Melanie Plageman wrote:
On Tue, Jun 11, 2024 at 8:05 AM Andrew Dunstan <andrew@dunslane.net> wrote:
On 2024-06-10 Mo 21:49, Andres Freund wrote:
Hi,
On 2024-06-10 16:46:56 -0400, Andrew Dunstan wrote:
I'm not sure what part of the testing infrastructure you think is
unmaintained. For example, the last release of Test::Simple was all the way
back on April 25.
IPC::Run is quite buggy and basically just maintained by Noah these days.
Yes, that's true. I think the biggest pain point is possibly the recovery tests.
Some time ago I did some work on wrapping libpq using the perl FFI module. It worked pretty well, and would mean we could probably avoid many uses of IPC::Run, and would probably be substantially more efficient (no fork required). It wouldn't avoid all uses of IPC::Run, though.
But my point was mainly that while a new framework might have value, I don't think we need to run out and immediately rewrite several hundred TAP tests. Let's pick the major pain points and address those.
FWIW, I felt a lot of pain trying to write recovery TAP tests with
IPC::Run's pumping functionality. It was especially painful (as
someone who knows even less Perl than the "street fighting Perl"
Thomas Munro has described having) before the additional test
infrastructure was added in BackgroundPsql.pm last year. As an example
of the "worst case", it took me two full work days to go from a repro
(with psql sessions on a primary and replica node) of the vacuum hang
issue being explored in [1] to a sort-of working TAP test which
demonstrated it - and that was with help from several other
committers. Granted, this is a complex case.
The pump stuff is probably the least comprehensible and most fragile
part of the whole infrastructure. As I just mentioned to Andres, I'm
hoping to make a substantial improvement in that area.
A small part of the issue is that, as Tristan has said elsewhere,
there aren't good developer tool integrations that I know about for
Perl. I use neovim's LSP support for C and Python (in other projects),
and there is a whole ecosystem of tools I can use for both C and
Python. I know not everyone likes or needs these, but I find that they
help me write and debug code faster.
You might find this useful:
<https://climatechangechat.com/setting_up_lsp_nvim-lspconfig_and_perl_in_neovim.html>
(I don't use neovim - I'm an old emacs dinosaur.)
I had offered to take a stab at writing some of the BackgroundPsql
test infrastructure in Python. I haven't started exploring that yet or
looking at what Jacob has done so far, but I am optimistic that this
is an area where it is worth seeing what is available to us outside of
IPC::Run.
Yeah, like I said, I'm working on reducing our reliance on especially
the more fragile parts of IPC::Run.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, Jun 13, 2024 at 4:06 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
There was a four-step plan sketch at the end of that email, titled "A
Plan". That was not intended to be "the final detailed plan", because
I was soliciting feedback on the exact pieces people wanted to try to
implement first, and I am not the God Emperor of Pytest. But it was
definitely A Plan.
Well, OK, now I feel a bit dumb. I guess I missed that or forgot about it.
- at least as much in-tree support for writing tests as we have today
with PostgreSQL::Test::whatever, but not necessarily a 1:1 reinvention
of the stuff we have now, and documentation of those facilities that
is as good or, ideally, better than what we have today.
I think this is way too much expectation for a v1 patch. If you were
committing this by yourself, would you agree to develop the entirety
of PostgreSQL::Test in a single commit, without the benefit of the
buildfarm checking you as you went, and other people trying to write
tests with it?
Eh... I'm confused. PostgreSQL::Test::Cluster is more than half of the
code in that directory, and without it you wouldn't be able to write
most of the TAP tests that we have today. You would really want to
call this project done without having an equivalent?
The important thing to me here (as it so often is) is to think like a
maintainer. Imagine that immediately after the patches for this
feature are committed, the developers who did the work all disappear
from the community and are never heard from again. How much pain does
that end us causing? The answer doesn't need to be zero; that is
unrealistic. But it also shouldn't be "well, if that happens we're
going to have to rip the feature out"
Can you elaborate on why that's not an okay outcome?
Well, you just argued that it should be an okay outcome, and I do sort
of see your point, but I refer you to my earlier reply about the
difficulty of getting anything reverted in the culture as it stands.
or "well, a bunch of committers
who didn't want to write tests in Python in the first place are now
going to have to do a lot of work in Python to stabilize the work
already committed."Is it that? If the problem is that, we should address that. Because if
that is truly the fear, I cannot assuage that fear without showing you
something, and I cannot show you something you do not want to see, if
you don't want to write tests in Python in the first place.
I have zero desire to write tests in Python. If I could convince
everyone here to spend their time and energy improving the stuff we
have in Perl instead of introducing a whole new test framework, I
would 100% do that. But I'm pretty sure that I can't, and I think the
project needs to pick from among realistic options rather than
theoretical ones. Said differently, it's not all about me.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2024-06-12 We 11:50, Andres Freund wrote:
Hi,
On 2024-06-11 07:28:23 -0700, Jacob Champion wrote:
On Mon, Jun 10, 2024 at 1:04 PM Andres Freund <andres@anarazel.de> wrote:
Just for context for the rest of the email: I think we desperately need to move
off perl for tests. The infrastructure around our testing is basically
unmaintained and just about nobody that started doing dev stuff in the last 10
years learned perl.
Okay. Personally, I'm going to try to stay out of discussions around
subtracting Perl and focus on adding Python, for a bunch of different
reasons:
I think I might have formulated my paragraph above badly - I didn't mean that
we should move away from perl tests tomorrow,
OK, glad we're on the same page there. Let's move on.
but that we need a path forward
that allows folks to write tests without perl.
OK, although to be honest I'm more interested in fixing some of the
things that have made testing with perl a pain, especially the IPC::Run
pump stuff.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2024-06-13 Th 15:16, Greg Sabino Mullane wrote:
I'm very proficient with Perl.
Yes you are, and just as well!
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2024-06-12 We 18:43, Jelte Fennema-Nio wrote:
I agree it's not a technical issue. It is a people issue. There are
very few people skilled in Perl active in the community. And most of
those are very senior hackers that have much more important things to
do than make our Perl testing framework significantly better. And the
less senior people, who might see improving tooling as a way to help
out in the community, try to stay away from Perl with a ten-foot pole.
So the result is that nothing gets improved. Especially since
very few people outside our community improve this tooling either.
FTR, I have put a lot of effort into maintaining and improving the
infrastructure over the years. And I don't think there is anything much
more important. So I'm going to put more effort in. And I'm not alone.
Andres, Alvaro, Noah and Thomas are some of those who have spent a lot
of effort on extending and improving our testing.
People tend to get a bit hung up about languages. I lost count of the
various languages I had learned when it got somewhere north of 30.
Still, I understand that perl has a few oddities that make people
scratch their heads (as do most languages). It's probably losing market
share, along with some of the other things we rely on. Not sure that
alone is a reason to move away from it.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, Jun 13, 2024 at 4:54 PM Andrew Dunstan <andrew@dunslane.net> wrote:
FTR, I have put a lot of effort into maintaining and improving the
infrastructure over the years. And I don't think there is anything much
more important. So I'm going to put more effort in. And I'm not alone.
Andres, Alvaro, Noah and Thomas are some of those who have spent a lot
of effort on extending and improving our testing.
I appreciate the work you've done, and the work others have done, and
I'm sorry if my comments about the state of the project's
infrastructure came across as a personal attack. Some of what is wrong
here is completely outside of our control e.g. Perl is less popular.
And even there, some people have done heroic work, like Noah stepping
up to help maintain IPC::Run. And even with the stuff that is under
our control, it's not that I don't like what you're doing. It's rather
that I think we need more people doing it. For example, the fact that
nobody's helping Thomas maintain this cfbot that we all have come to
rely on, or helping him get that integrated into
commitfest.postgresql.org, is a problem. You're not on the hook to do
that, nor is anyone else. Likewise, the PostgreSQL::Test::whatever
modules are mostly evolving when it's absolutely necessary to get some
other patch committed, rather than anyone looking to improve them very
much for their own sake. Maybe part of the problem, as Greg said, is
that we don't do a good enough job advertising what the problems are
or how people can help, but whatever the cause, it's not a very
enjoyable experience, at least for me.
But again, I don't blame you for any of that. You're clearly a big
part of why it's going as well as it is!
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2024-06-13 Th 17:23, Robert Haas wrote:
On Thu, Jun 13, 2024 at 4:54 PM Andrew Dunstan <andrew@dunslane.net> wrote:
FTR, I have put a lot of effort into maintaining and improving the
infrastructure over the years. And I don't think there is anything much
more important. So I'm going to put more effort in. And I'm not alone.
Andres, Alvaro, Noah and Thomas are some of those who have spent a lot
of effort on extending and improving our testing.
I appreciate the work you've done, and the work others have done, and
I'm sorry if my comments about the state of the project's
infrastructure came across as a personal attack. Some of what is wrong
here is completely outside of our control e.g. Perl is less popular.
And even there, some people have done heroic work, like Noah stepping
up to help maintain IPC::Run. And even with the stuff that is under
our control, it's not that I don't like what you're doing. It's rather
that I think we need more people doing it. For example, the fact that
nobody's helping Thomas maintain this cfbot that we all have come to
rely on, or helping him get that integrated into
commitfest.postgresql.org, is a problem. You're not on the hook to do
that, nor is anyone else. Likewise, the PostgreSQL::Test::whatever
modules are mostly evolving when it's absolutely necessary to get some
other patch committed, rather than anyone looking to improve them very
much for their own sake. Maybe part of the problem, as Greg said, is
that we don't do a good enough job advertising what the problems are
or how people can help, but whatever the cause, it's not a very
enjoyable experience, at least for me.
But again, I don't blame you for any of that. You're clearly a big
part of why it's going as well as it is!
Thank you, I'm not offended by anything you or anyone else has said.
Clearly there are areas we can improve, and we need to be somewhat more
proactive about it.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Thu, 13 Jun 2024 at 23:35, Andrew Dunstan <andrew@dunslane.net> wrote:
Clearly there are areas we can improve, and we need to be somewhat more
proactive about it.
To follow that great suggestion, I updated the meson wiki[1]https://wiki.postgresql.org/wiki/Meson#Test_related_commands after I
realized some of the major gripes I had with the Perl tap test output
were not actually caused by Perl but by meson:
The main change I made was using "-q --print-errorlogs" instead of
"-v", to reduce the enormous clutter in the output of the commands in
the wiki to something much more reasonable.
I also added examples of how to run specific tests.
[1]: https://wiki.postgresql.org/wiki/Meson#Test_related_commands
On Thu, Jun 13, 2024 at 1:04 PM Robert Haas <robertmhaas@gmail.com> wrote:
One caveat here,
perhaps, is that the focus of the work you've done up until now and
the things that I and other community members may want as a condition
of merging stuff may be somewhat distinct. You will have naturally
been focused on your goals rather than other people's goals, or so I
assume.
Right. That's a risk I knew I was taking when I wrote it, so it's not
going to offend me when I need to rewrite things.
I would be a bit wary of splitting it up over
too many different threads. It may well make sense to split it up, but
it will probably be easier to review if the core work to enable this
is one patch set on one thread where someone can read just that one
thread and understand the situation, rather than many threads where
you have to read them all.
I'll try to avoid too many threads. But right now there is indeed just
one thread (OAUTHBEARER) and it's way too much:
- the introduction of pytest
- a Construct-based manipulation of the wire protocol, including
Wireshark-style network traces on failure (a toy sketch follows this list)
- pytest fixtures which spin up libpq and the server in isolation from
each other, relying on the Construct implementation to complete the
seam
- OAuth, which was one of the motivating use cases (but not the only
one) for all of the previous items
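For anyone who hasn't seen Construct before, here's a toy sketch (not
from the actual suite; the message layout is just PostgreSQL's
documented ReadyForQuery format) of the kind of declarative definition
it enables:

    from construct import Struct, Const, Int32ub, Bytes

    # ReadyForQuery on the wire: 'Z', int32 length (which includes
    # itself), then a one-byte transaction status.
    ReadyForQuery = Struct(
        "type" / Const(b"Z"),
        "length" / Const(5, Int32ub),
        "status" / Bytes(1),  # b"I" idle, b"T" in transaction, b"E" failed
    )

    msg = ReadyForQuery.parse(b"Z\x00\x00\x00\x05I")
    assert msg.status == b"I"

Definitions like that parse and build symmetrically, which is what
makes driving one side of the conversation by hand tractable.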
I really don't want to derail this thread with those. There are other
people here with their own hopes and dreams (see: unconference notes),
and I want to give them a platform too.
That doesn't mean I want to
do this without a plan; it just means that the plan can involve saying
"this is not working and we can undo it" which makes the uncertainty
easier to take.As a community, we're really bad at this. [...]
I will carry the response to this to the next email.
Thanks,
--Jacob
On Thu, Jun 13, 2024 at 1:27 PM Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jun 13, 2024 at 4:06 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
There was a four-step plan sketch at the end of that email, titled "A
Plan". That was not intended to be "the final detailed plan", because
I was soliciting feedback on the exact pieces people wanted to try to
implement first, and I am not the God Emperor of Pytest. But it was
definitely A Plan.
Well, OK, now I feel a bit dumb. I guess I missed that or forgot about it.
No worries. It's a really long thread. :D
But also: do you have opinions on what to fill in as steps 2
(something we have no ability to test today) and 3 (something we do
test today, but hate)?
My vote for step 2 is "client and server separation", perhaps by
testing libpq fallback against a server that claims support for
different build-time options. I don't want to have a vote in step 3,
because part of that step is proving that this framework can provide
value for a part of the project I don't really know much about.
I think this is way too much expectation for a v1 patch. If you were
committing this by yourself, would you agree to develop the entirety
of PostgreSQL::Test in a single commit, without the benefit of the
buildfarm checking you as you went, and other people trying to write
tests with it?
Eh... I'm confused. PostgreSQL::Test::Cluster is more than half of the
code in that directory, and without it you wouldn't be able to write
most of the TAP tests that we have today.
Well, in my defense, you said "PostgreSQL::Test::whatever", which I
assumed meant all of it, including Kerberos.pm and SSL::Server and
AdjustUpgrade and... That seemed like way too much to me (and still
does!), but if that's not what you were arguing then never mind.
Yes, Cluster.pm seems like a pretty natural thing to ask for. I
imagine it's one of the first things we're going to need. And yet...
You would really want to
call this project done without having an equivalent?
...I have this really weird sneaking suspicion that, if a replacement
of the end-to-end Perl acceptance tests can be made an explicit
anti-goal in the short term, we might not necessarily need an
"equivalent" for v1. I realize that seems bizarre, because of course
we need a way to start the server if we want to test the server. But
frankly, starting a server is Pretty Easy (tm), and Cluster.pm has to
do a lot more than that because IMO it's designed for a variety of
acceptance-oriented tasks. 3000+ lines!
If there's widespread interest (as opposed to being just my own
personal fever dream) in testing Postgres components as individual
pieces rather than setting up the world, then I wonder if the
functionality from Cluster.pm couldn't be pared down a lot. Maybe you
don't need a centralized ->psql() or a ->command_ok() helper, because
you're usually not trying to test psql and other utilities during your
server-only tests.
Maybe you can just stand up a standby without a primary and drive it
via mock replication. Do you need quite as many "poll and wait for
some asynchronous result" type things when you're not waiting for a
result to cascade through a multinode system? Does something like (for
example) ->pg_recvlogical_upto() really have to be implemented in our
"core" fixtures or can it be done more easily by whoever needs that in
the future? Maybe You Ain't Gonna Need It.
If (he said, atop his naive idealistic soapbox) we can find a way to
put off writing utilities until we write the tests that need them,
without procrastinating, and without putting all of the negative
externalities of that approach on the committers with low-quality
copy-paste proliferation, and I'd like a pony while I'm at it, then I
think the result might end up being pretty coherent and maintainable.
Then not having "at least as much in-tree support for writing tests as
we have today" for the very first commit would be a feature and not a
bug.
Now, maybe if the collective ability to do that existed, we would have
done it already with Perl, but I do actually wonder whether that's
true or not.
Or, maybe, the very first suggestion for Step 3 will be something that
needs absolutely everything in Cluster.pm. So be it; I can live
without a pony.
You would really want to
call this project done without having an equivalent?
(A cop-out but not-really-cop-out alternative answer to this question
is that this project is not going to be "done" any more than Postgres
will ever be "done", and that's part of what I'm arguing should be
considered natural and okay. I understand that it is easier for me to
take that stance when I am not on the hook for maintaining it, so I
don't expect us to necessarily see eye-to-eye on it.)
Can you elaborate on why that's not an okay outcome?
Well, you just argued that it should be an okay outcome, and I do sort
of see your point, but I refer you to my earlier reply about the
difficulty of getting anything reverted in the culture as it stands.
Earlier reply was:
As a community, we're really bad at this. Once something gets
committed, getting a consensus to revert it is really hard, especially
if a major release has happened meanwhile, but most of the time even
if it hasn't. It might be a little easier in this case, since after
all it's not a directly user-visible feature. But historically what
happens if somebody says "hey, there are six unfixed problems with
this feature!" is that everybody says "well, you're free to fix the
problems if you want, but you're not allowed to revert the feature."
And that is *exactly* how we end up with stuff like the current TAP
test framework: ripping that out would mean removing all the TAP tests
that depend on it, and that wouldn't have achieved consensus two
months after the feature went in, let alone today.
Well... I don't know how to fix that. Here's a draft proposal after a
few minutes of thought, which may need to be discarded after a few
more minutes of thought.
If there's agreement that New Tests -- not necessarily written in
Python, but I selfishly hope they are -- exist on a probationary
status, then maybe part of that is going to have to be an agreement:
New features have to be able to have some minimum maintainability
level *on the basis of the Perl tests only*, while the probationary
period is in effect. It can't be the equivalent maintainability level,
because that's either proof that the New Tests are giving us nothing,
or proof that everyone is being forced to implement the exact same
tests in both Perl and New Test. Neither is good.
Since we're currently focused on end-to-end acceptance with Perl, that
is probably a lower bar than what we'd maybe prefer, but I think that
is the bar we have right now. It also exists as a forcing function to
make sure that the additional tests are adding value over what we get
with the Perl, which may paradoxically increase the chances of New
Test success. (I can't tell if this is magical thinking or not.)
So if a committer doesn't want responsibility for the feature if the
new tests were deleted, they don't commit. Maybe that's unrealistic
and too painful. It does increase the review requirements of
committers quite a bit. It might disqualify my OAuth work (which is
maybe evidence in its favor?). Maybe it increases the foot-in-the-door
effect too much. Maybe there would have to be some trust-building
where right now there is not? Not sure.
Now, it has been suggested to me by at least one other person involved
with the project that we need to be more open to the kind of thing
that you propose here: add experimental things and take them out if it
doesn't work out. I can definitely understand that this might be a
culturally better approach than what we currently do. So maybe that's
the way forward, but it is hard (at least for me) to get past the fear
of being the one left holding the bag, and I suspect that other
committers have similar fears. What exactly we should do about that,
I'm not sure.
Yeah.
I have zero desire to write tests in Python. If I could convince
everyone here to spend their time and energy improving the stuff we
have in Perl instead of introducing a whole new test framework, I
would 100% do that. But I'm pretty sure that I can't, and I think the
project needs to pick from among realistic options rather than
theoretical ones. Said differently, it's not all about me.
Then, for what it's worth: I really do want to make sure that your
life, and the life of all the other committers, does not get
significantly harder if this goes in. I don't think it will, but if
I'm wrong, I want it to come back out, and then we can regroup or
pivot entirely and move forward together.
--Jacob
On Thu, Jun 13, 2024 at 8:12 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
But also: do you have opinions on what to fill in as steps 2
(something we have no ability to test today) and 3 (something we do
test today, but hate)?
My vote for step 2 is "client and server separation", perhaps by
testing libpq fallback against a server that claims support for
different build-time options. I don't want to have a vote in step 3,
because part of that step is proving that this framework can provide
value for a part of the project I don't really know much about.
I mean, both Perl and Python are Turing-complete. Anything you can do
in one, you can do in the other, especially when you consider that
we're not likely to accept too many dependencies on external Perl or
Python modules. That's why I see this as nothing more or less than an
exercise in letting people use the programming language they prefer.
We've talked about a libpq FFI interface, but it hasn't been done; now
we're talking about maybe having a Python one. Perhaps we'll end up
with both. Then you can imagine porting tests from one language to the
other and the only difference is whether you'd rather have line noise
before all of your variable names or semantically significant
whitespace.
I just don't believe in the idea that we're going to write one
category of tests in one language and another category in another
language. As soon as we open the door to Python tests, people are
going to start writing the TAP tests that they would have written in
Perl in Python instead. And if the test utilities that we have for
Perl are not there for Python, then they'll either open code things
for which they would have used a module, or they'll write a
stripped-down version of the module that will then get built up patch
by patch until, 50 or 100 or 200 hours of committer-review later, it
resembles the existing Perl module. And, if the committer pushes back
and says, hey, why not write this test in Perl which already has all
of this test infrastructure in place already, then the submitter will
wander off muttering about how PostgreSQL committers are hidebound
backward individuals who just try to ruin everybody's day. So as I see
it, the only reasonable plan here if we want to introduce testing in
Python (or C#, or Ruby, or Go, or JavaScript, or Lua, or LOLCODE) is
to try to achieve a reasonable degree of parity between that language
and Perl. Because then we can at least review the new infrastructure
all at once, instead of incrementally spread across many patches
written, reviewed, and committed by many different people.
Now, I completely understand if you're not excited about getting
sucked down that rabbit-hole, and maybe some other committer is going
to see this differently than I do, and that's fine. But my view is
that if you're not interested in doing all the work to let people do
more or less the kind of stuff that they currently do in Perl in
Python instead, then your alternative is to take the tests that you
want to add and rewrite them in Perl. And I am fairly certain that if
you choose that option, it will save me, and a bunch of other
committers, a whole lot of work, at least in the short term. If we add
support for Python, we are going to end up having to do a lot of
things twice for the next let's say ten to twenty years until somebody
rewrites the remaining Perl tests in Python or whatever language is
hot and cool by then. My opinion is that we need to be open to
enduring that pain because we can't indefinitely hold our breath and
insist on using obsolete tools for everything, but that doesn't mean
that I don't think it will be painful.
Consider the meson build system project. To get that committed, Andres
had to make it do pretty much everything MSVC could do and mostly
everything that configure could do, and the places where he didn't
make it do everything configure could do remain sore spots that I, at
least, am not happy about. And in that case, he also managed to get
MSVC removed entirely, so that we do not have a larger number of build
systems in total than we had before. Furthermore, the amount of code
in buildsystem files (makefiles, meson.build) in a typical patch
needing review is usually none or very little, whereas the amount of
test code in a patch is sometimes quite large. I've come around to the
belief that the meson work was worthwhile -- running tests is so much
faster and nicer now -- but it was a ton of work to get done and
impacted everyone in the development community, and I think the blast
radius for this change is likely to be larger for the reasons
suggested earlier in this paragraph.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 2024-06-14 Fr 08:10, Robert Haas wrote:
We've talked about a libpq FFI interface, but it hasn't been done;
Hold my beer :-)
I just posted a POC for that.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Robert Haas <robertmhaas@gmail.com> writes:
I mean, both Perl and Python are Turing-complete. Anything you can do
in one, you can do in the other, especially when you consider that
we're not likely to accept too many dependencies on external Perl or
Python modules. That's why I see this as nothing more or less than an
exercise in letting people use the programming language they prefer.
I think that's an oversimplified analysis. Sure, the languages are
both Turing-complete, but for our purposes here they are both simply
glue languages around some set of testing facilities. Some of those
facilities will be provided by the base languages (plus whatever
extension modules we choose to require) and some by code we write.
The overall experience of writing tests will be determined by the
testing facilities far more than by the language used for glue.
That being the case, I do agree with your point that Python
equivalents to most of PostgreSQL::Test will need to be built up PDQ.
Maybe they can be better than the originals, in features or ease of
use, but "not there at all" is not better.
But what I'd really like to see is some comparison of the
language-provided testing facilities that we're proposing
to depend on. Why is pytest (or whatever) better than Test::More?
I also wonder about integration of python-based testing with what
we already have. A significant part of what you called the meson
work had to do with persuading pg_regress, isolationtester, etc
to output test results in the common format established by TAP.
Am I right in guessing that pytest will have nothing to do with that?
Can we even manage to dump perl and python test scripts into the same
subdirectory and sort things out automatically? I'm definitely going
to be -1 for a python testing feature that cannot integrate with what
we have because it demands its own control and result-collection
infrastructure.
regards, tom lane
Hi,
On 2024-06-14 11:49:29 -0400, Tom Lane wrote:
I also wonder about integration of python-based testing with what
we already have. A significant part of what you called the meson
work had to do with persuading pg_regress, isolationtester, etc
to output test results in the common format established by TAP.
FWIW, meson's testrunner doesn't require TAP, the lowest common denominator is
just an exit code. However, for things that run many "sub" tests, it's a lot
nicer if the failures can be reported more granularly than just "the entire
testsuite failed".
Meson currently supports:
exitcode: the executable's exit code is used by the test harness to record the outcome of the test.
tap: Test Anything Protocol.
gtest (since 0.55.0): for Google Tests.
rust (since 0.56.0): for native Rust tests.
Am I right in guessing that pytest will have nothing to do with that?
Looks like there's a plugin for pytest to support tap as output:
https://pypi.org/project/pytest-tap/
However, it's not available as a debian package. I know that some folks just
advocate installing dependencies via venv, but I personally don't think
that'll fly. For one, it'll basically prevent tests being run by packagers.
Can we even manage to dump perl and python test scripts into the same
subdirectory and sort things out automatically?
That shouldn't be much of a problem.
I'm definitely going to be -1 for a python testing feature that cannot
integrate with what we have because it demands its own control and
result-collection infrastructure.
Ditto.
Greetings,
Andres Freund
On 2024-06-14 09:11:11 -0700, Andres Freund wrote:
On 2024-06-14 11:49:29 -0400, Tom Lane wrote:
Am I right in guessing that pytest will have nothing to do with that?
Looks like there's a plugin for pytest to support tap as output:
https://pypi.org/project/pytest-tap/
However, it's not available as a debian package. I know that some folks just
advocate installing dependencies via venv, but I personally don't think
that'll fly. For one, it'll basically prevent tests being run by packagers.
If this were the blocker, I think we could just ship an output adapter
ourselves. pytest-tap is not a lot of code:
https://github.com/python-tap/pytest-tap/blob/main/src/pytest_tap/plugin.py
So either vendoring it or just writing an even simpler version ourselves seems
entirely feasible.
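For instance, a rough sketch of a home-grown adapter (the hook names
and report fields below are pytest's documented plugin API; everything
else is illustrative and untested against our harness):

    # conftest.py: buffer one result per test, emit TAP at session end.
    _results = []

    def pytest_runtest_logreport(report):
        # Keep the "call" phase, plus setup-phase skips and failures
        # (which never reach "call").
        if report.when == "call" or (report.when == "setup" and not report.passed):
            _results.append(report)

    def pytest_sessionfinish(session, exitstatus):
        # Emitting everything at the end keeps the TAP stream from
        # interleaving with pytest's own terminal output.
        print(f"1..{len(_results)}")
        for i, report in enumerate(_results, start=1):
            if report.skipped:
                print(f"ok {i} - {report.nodeid} # SKIP")
            elif report.passed:
                print(f"ok {i} - {report.nodeid}")
            else:
                print(f"not ok {i} - {report.nodeid}")

You'd presumably also want to silence pytest's normal terminal report;
the point is just that the required plugin surface is small.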
On Thu, Jun 13, 2024 at 1:08 PM Jelte Fennema-Nio <postgres@jeltef.nl>
wrote:
But Perl is at the next level of unmaintained infrastructure. It is
actually clear how you can contribute to it, but still no new
community members actually want to contribute to it. Also, it's not
only unmaintained by us but it's also pretty much unmaintained by the
upstream community.
I am not happy with the state of Perl, as it has made some MAJOR missteps
along the way, particularly in the last 5 years. But can we dispel this
strawman? There is a difference between "unpopular" and "unmaintained". The
latest version of Perl was released May 20, 2024. The latest release of
Test::More was April 25, 2024. Both are heavily used. Just not as heavily
as they used to be. :)
Cheers,
Greg
On Fri, 14 Jun 2024 at 22:33, Greg Sabino Mullane <htamfids@gmail.com> wrote:
I am not happy with the state of Perl, as it has made some MAJOR missteps along the way, particularly in the last 5 years. But can we dispel this strawman? There is a difference between "unpopular" and "unmaintained". The latest version of Perl was released May 20, 2024. The latest release of Test::More was April 25, 2024. Both are heavily used. Just not as heavily as they used to be. :)
Sorry, yes I exaggerated here. Looking at the last Perl changelog[1]https://perldoc.perl.org/perldelta
it's definitely getting more new features and improvements than I had
thought.
Test::More, on the other hand, while indeed still maintained, is
definitely not getting significant new feature development or
improvements[2]https://github.com/Test-More/test-more/blob/master/Changes. Especially when comparing it to pytest[3]https://docs.pytest.org/en/stable/changelog.html.
[1]: https://perldoc.perl.org/perldelta
[2]: https://github.com/Test-More/test-more/blob/master/Changes
[3]: https://docs.pytest.org/en/stable/changelog.html
On Fri, Jun 14, 2024 at 9:24 AM Robert Haas <robertmhaas@gmail.com> wrote:
For example, the fact that
nobody's helping Thomas maintain this cfbot that we all have come to
rely on, or helping him get that integrated into
commitfest.postgresql.org, is a problem.
I've been talking to Magnus and Jelte about cfbot and we're hoping to
have some good news soon...
On Fri, 14 Jun 2024 at 17:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:
But what I'd really like to see is some comparison of the
language-provided testing facilities that we're proposing
to depend on. Why is pytest (or whatever) better than Test::More?
Some advantages of pytest over Test::More:
1. It's much easier to debug failing tests using the output that
pytest gives. A good example of this is on pytest's homepage[1]https://docs.pytest.org/en/8.2.x/#a-quick-example
(i.e. it shows the value given to the call to inc in the error; see
also the sketch after this list)
2. No need to remember a specific comparison DSL
(is/isnt/is_deeply/like/ok/cmp_ok/isa_ok), just put assert in front of
a boolean expression and your output is great (if you want to add a
message too for clarity you can use: assert a == b, "the world is
ending")
3. Very easy to print postgres log files on stderr/stdout when a test fails.
This might be possible/easy with Perl too, but we currently don't do
that. So right now for many failures you're forced to traverse the
build/testrun/... directory tree to find the logs.
4. Pytest has autodiscovery of test files and functions, so we
probably wouldn't have to specify all of the exact test files anymore
in the meson.build files.
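To make points 1, 2, and 4 concrete, here's a minimal sketch, which is
essentially the example from the pytest homepage: drop it into a file
matching the test_*.py autodiscovery pattern and run pytest with no
arguments:

    # test_sample.py
    def inc(x):
        return x + 1

    def test_answer():
        assert inc(3) == 5  # plain assert, no comparison DSL

    # On failure, pytest prints the introspected values, roughly:
    #   E    assert 4 == 5
    #   E     +  where 4 = inc(3)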
Regarding 2, there are ~150 checks that are using a suboptimal way of
testing for a comparison. Mostly a lot that could use "like(..., ...)"
instead of "ok(... ~= ...)"
❯ grep '\bok(.*=' **.pl | wc -l
151
On 2024-06-14 Fr 18:11, Jelte Fennema-Nio wrote:
On Fri, 14 Jun 2024 at 17:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:
But what I'd really like to see is some comparison of the
language-provided testing facilities that we're proposing
to depend on. Why is pytest (or whatever) better than Test::More?
Some advantages of pytest over Test::More:
1. It's much easier to debug failing tests using the output that
pytest gives. A good example of this is on pytest's homepage[1]
(i.e. it shows the value given to the call to inc in the error)
2. No need to remember a specific comparison DSL
(is/isnt/is_deeply/like/ok/cmp_ok/isa_ok), just put assert in front of
a boolean expression and your output is great (if you want to add a
message too for clarity you can use: assert a == b, "the world is
ending")
3. Very easy to print postgres log files on stderr/stdout when a test fails.
This might be possible/easy with Perl too, but we currently don't do
that. So right now for many failures you're forced to traverse the
build/testrun/... directory tree to find the logs.
I see the fact that we stash the output in a file as a feature. Without
it, capturing failure information in the buildfarm client would be more
difficult, especially if there are multiple failures. So this is
actually something I think we would need for any alternative framework.
Maybe we need an environment setting that would output the
regress_log_00whatever file to stderr on failure. That should be pretty
easy to arrange in the END handler for PostgreSQL::Test::Utils.
4. Pytest has autodiscovery of test files and functions, so we
probably wouldn't have to specify all of the exact test files anymore
in the meson.build files.
I find this comment a bit ironic. We don't need to do that with the
Makefiles, and the requirement to do so was promoted as a meson feature
rather than a limitation, ISTR.
Regarding 2, there are ~150 checks that are using a suboptimal way of
testing for a comparison. Mostly a lot that could use "like(..., ...)"
instead of "ok(... ~= ...)"
❯ grep '\bok(.*=' **.pl | wc -l
151
Well, let's fix those. I would be tempted to use cmp_ok() for just about
all of them.
But the fact that Test::More has a handful of test primitives rather
than just one strikes me as a relatively minor complaint.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Sat, 15 Jun 2024 at 16:45, Andrew Dunstan <andrew@dunslane.net> wrote:
I see the fact that we stash the output in a file as a feature. Without
it, capturing failure information in the buildfarm client would be more
difficult, especially if there are multiple failures. So this is
actually something I think we would need for any alternative framework.
I indeed heard that the current behaviour was somehow useful to the
buildfarm client.
Maybe we need an environment setting that would output the
regress_log_00whatever file to stderr on failure. That should be pretty
easy to arrange in the END handler for PostgreSQL::Test::Utils.
That sounds awesome! But I'm wondering: do we really need a setting to
enable/disable that? Can't we always output it to stderr on failure?
If we output the log both to stderr and as a file, would that be fine
for the build farm? If not, a setting should work. (but I'd prefer the
default for that setting to be on in that case, it seems much easier
to turn it off in the buildfarm client, instead of asking every
developer to turn the feature on)
4. Pytest has autodiscovery of test files and functions, so we
probably wouldn't have to specify all of the exact test files anymore
in the meson.build files.
I find this comment a bit ironic. We don't need to do that with the
Makefiles, and the requirement to do so was promoted as a meson feature
rather than a limitation, ISTR.
Now, I'm very curious why that would be considered a feature. I
certainly have had many cases where I forgot to add the test file to
the meson.build file.
Regarding 2, there are ~150 checks that are using a suboptimal way of
testing for a comparison. Mostly a lot that could use "like(..., ...)"
instead of "ok(... ~= ...)"
❯ grep '\bok(.*=' **.pl | wc -l
151
Well, let's fix those. I would be tempted to use cmp_ok() for just about
all of them.
Sounds great to me.
But the fact that Test::More has a handful of test primitives rather
than just one strikes me as a relatively minor complaint.
It is indeed a minor paper cut, but paper-cuts add up.
Honestly, my primary *objective* complaint about our current test
suite, is that when a test fails, it's very often impossible for me to
understand why the test failed, by only looking at the output of
"meson test". I think logging the postgres log to stderr for Perl, as
you proposed, would significantly improve that situation. I think the
only thing that we cannot get from Perl Test::More that we can from
pytest is the fancy recursive introspection of the expression that
pytest shows on error.
Apart from that my major *subjective* complaint is that I very much
dislike writing Perl code. I'm slow at writing it and I don't (want
to) improve at it because I don't have reasons to use it except for
Postgres tests. So currently I'm not really incentivised to write more
tests than the bare minimum, help improve the current test tooling, or
add new testing frameworks for things we currently cannot test.
Afaict, there's a significant part of our current community who feel
the same way (and I'm pretty sure every sub-30 year old person who
newly joins the community would feel the exact same way too).
As a project I think we would like to have more tests, and to have
more custom tooling to test things that we currently cannot (e.g.
oauth or manually messing with the wire-protocol). I think the only
way to achieve that is by encouraging more people to work on these
things. I very much appreciate that you and others are improving our
Perl tooling, because that makes our current tests easier to work
with. But I don't think it significantly increases the willingness to
write tests or test-tooling for people that don't want to write Perl
in the first place.
So I think the only way to get more people involved in contributing
tests and test-tooling is by allowing testing in another language than
Perl (but also still allow writing tests in Perl). Even if that means
that we have two partially-overlapping test frameworks, that are both
easy to use for different things. In my view that's even a positive
thing, because that means we are able to test more with two languages
than we would be able to with either one (and it's thus useful to have
both).
And I agree with Robert that Python seems like the best choice for
this other language, given its current popularity level. But as I said
before, I'm open to other languages as well.
On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
Honestly, my primary *objective* complaint about our current test
suite, is that when a test fails, it's very often impossible for me to
understand why the test failed, by only looking at the output of
"meson test". I think logging the postgres log to stderr for Perl, as
you proposed, would significantly improve that situation. I think the
only thing that we cannot get from Perl Test::More that we can from
pytest, is the fancy recursive introspection of the expression that
pytest shows on error.
This surprises me. I agree that the current state of affairs is kind
of annoying, but the contents of regress_log_whatever are usually
quite long. Printing all of that out to standard output seems like
it's just going to flood the terminal with output. I don't think I'd
be a fan of that change.
I think I basically agree with all the nearby comments about how the
advantages you cite for Python aren't, I don't know, entirely
compelling. Switching from ok() to is() or cmp_ok() or like() is minor
stuff. Where the output goes is minor stuff. The former can be fixed,
and the latter can be worked around with scripts and aliases. The one
thing I know about that *I* think is a pretty big problem about Perl
is that IPC::Run is not really maintained. But I wonder if the
solution to that is to do something ourselves instead of depending on
IPC::Run. Beyond that, I think this is just a language popularity
contest.
--
Robert Haas
EDB: http://www.enterprisedb.com
Hi,
On 2024-06-15 10:45:16 -0400, Andrew Dunstan wrote:
4. Pytest has autodiscovery of test files and functions, so we
probably wouldn't have to specify all of the exact test files anymore
in the meson.build files.
I find this comment a bit ironic. We don't need to do that with the
Makefiles, and the requirement to do so was promoted as a meson feature
rather than a limitation, ISTR.
The reason it's good to have the list of tests somewhere explicit is that we
have different types of test. With make, there is a single target for all tap
tests. If you want to run tests concurrently, make can only schedule the tap
tests at the granularity of a directory. If you want concurrency below that,
you need to use concurrency on the prove level. But that means that you have
extremely varying concurrency, depending on whether make runs targets that
have no internal concurrency or make runs e.g. the recovery tap tests.
I don't think we should rely on global test discovery via pytest. That'll lead
to uncontrollable concurrency again, which means much longer test times. We'll
always have different types of tests, and just scheduling them via
"top-level" tools for different test types won't work well. That's not
true for many projects where tests have vastly lower resource usage.
Greetings,
Andres Freund
On Sat, 15 Jun 2024 at 19:27, Robert Haas <robertmhaas@gmail.com> wrote:
This surprises me. I agree that the current state of affairs is kind
of annoying, but the contents of regress_log_whatever are usually
quite long. Printing all of that out to standard output seems like
it's just going to flood the terminal with output. I don't think I'd
be a fan of that change.
I think at the very least the locations of the different logs should
be listed in the output.
On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <postgres@jeltef.nl>
wrote:
Afaict, there's a significant part of our current community who feel the
same way (and I'm pretty sure every sub-30 year old person who
newly joins the community would feel the exact same way too).
Those young-uns are also the same group who hold their nose when coding in
C, and are always clamoring for rewriting Postgres in Rust. And before
that, C++. And next year, some other popular language that is clearly
better and more popular than C.
And I agree with Robert that Python seems like the best choice for this
other language, given its current popularity level. But as I said
before, I'm open to other languages as well.
Despite my previous posts, I am open to other languages too, including
Python, but the onus is really on the new language promoters to prove that
the very large amount of time and trouble is worth it, and worth it for
language X.
Cheers,
Greg
On Fri, Jun 14, 2024 at 5:09 PM Jelte Fennema-Nio <postgres@jeltef.nl>
wrote:
Test::More on the other hand, while indeed still maintained, it's
definitely not getting significant new feature development or
improvements[2]. Especially when comparing it to pytest[3].
That's fair, although it's a little hard to tell if the lack of new
features is because they are not needed for a stable, mature project, or
because few people are asking for and developing new features. Probably a
bit of both. But I'll be the first to admit Perl is dying; I just don't
know what should replace it (or how - or when). Python has its quirks, but
all languages do, and your claim that it will encourage more and easier
test writing by developers is a good one.
Cheers,
Greg
On Sat, Jun 15, 2024 at 5:53 PM Greg Sabino Mullane <htamfids@gmail.com> wrote:
On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
Afaict, there's a significant part of our current community who feel the same way (and I'm pretty sure every sub-30 year old person who
newly joins the community would feel the exact same way too).
Those young-uns are also the same group who hold their nose when coding in C, and are always clamoring for rewriting Postgres in Rust. And before that, C++. And next year, some other popular language that is clearly better and more popular than C.
Writing a new test framework in a popular language that makes it more
likely that more people will write more tests and test infrastructure
is such a completely different thing than suggesting we rewrite
Postgres in Rust that I feel that this comparison is unfair and,
frankly, a distraction from the discussion at hand.
- Melanie
On Sat, Jun 15, 2024 at 6:00 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
Those young-uns are also the same group who hold their nose when coding in C, and are always clamoring for rewriting Postgres in Rust. And before that, C++. And next year, some other popular language that is clearly better and more popular than C.
Writing a new test framework in a popular language that makes it more
likely that more people will write more tests and test infrastructure
is such a completely different thing than suggesting we rewrite
Postgres in Rust that I feel that this comparison is unfair and,
frankly, a distraction from the discussion at hand.
I don't really agree with this. We have been told before that we would
attract more developers to our community if only we allowed backend
code to be written in C++ or Rust, and that is not altogether a
different thing than saying that we would attract more test developers
if only we allowed test code to be written in Python or whatever. The
difference is one of degree rather than of kind. We have a lot more
backend code than we do test code, I'm fairly sure, and our tests are
more self-contained: it's not *as* problematic if some tests are
written in one language and others in another as it would be if
different parts of the backend used different languages, and it
wouldn't be *as* hard if at some point we decided we wanted to convert
all remaining code to the new language. So, I have a much harder time
imagining that we would start allowing a new language for backend code
than that we would start allowing a new language for tests, but I
don't think the issues are fundamentally different.
But that said, I'm not sure the programming language is the real
issue. If I really wanted to participate in an open source project,
I'd probably be willing to learn a new programming language to do
that. Maybe some people wouldn't, but I had to learn a whole bunch of
them in college, and learning one more doesn't sound like the biggest
of deals. But, would I feel respected and valued as a participant in
that project? Would I have to use weird tools and follow arcane and
frustrating processes? If I did, *that* would make me give up. I don't
want to say that the choice of programming language doesn't matter at
all, but it seems to me that it might matter more because it's a
symptom of being unwilling to modernize things rather than for its own
sake.
--
Robert Haas
EDB: http://www.enterprisedb.com
Hi Greg, Jelte,
On Sat, 15 Jun 2024 at 23:53, Greg Sabino Mullane <htamfids@gmail.com>
wrote:
On Sat, Jun 15, 2024 at 12:48 PM Jelte Fennema-Nio <postgres@jeltef.nl>
wrote:
Afaict, there's a significant part of our current community who feel the
same way (and I'm pretty sure every sub-30 year old person who
newly joins the community would feel the exact same way too).
I feel I'm still relatively new (started in the past 4 years) and I have
quite some time left before I hit 30 years of age.
I don't specifically feel the way you describe, nor do I think I've ever
really felt like that in the previous nearly 4 years of hacking. Then
again, I'm not interested in testing frameworks, so I don't feel much at
all about which test frameworks we use.
Those young-uns are also the same group who hold their nose when coding
in C, and are always clamoring for rewriting Postgres in Rust.
Could you point me to one occasion I have 'always' clamored for this, or
any of "those young-uns" in the community? I may not be a huge fan of C,
but rewriting PostgreSQL in [other language] is not on the list of things
I'm clamoring for. I may have given off-hand mentions that [other language]
would've helped in certain cases, sure, but I'd hardly call that clamoring.
Kind regards,
Matthias van de Meent
Hi Jacob,
For the v18 cycle, I would like to try to get pytest [1] in as a
supported test driver, in addition to the current offerings.
(I'm tempted to end the email there.)
Huge +1 from me and many thanks for working on this.
Two cents from me.
I spent a fair part of my career writing in Perl. Personally I like
the language and often find it more convenient for the tasks I'm
working on than Python.
This being said, there were several projects I was involved in where
we had to choose a scripting language. In all the cases there was a
strong push-back against Perl and Python always seemed to be a common
denominator for everyone. So I ended up with the rule of thumb to use
Perl for projects I'm working on alone and Python otherwise. Although
I don't know the objective reality across the entire industry, I've
spoken to many people whose observations were similar.
We could speculate about the reasons why people seem to prefer Python
(good IDE support*, unique** libraries like Matplotlib / NumPy /
SciPy, ...) but honestly I don't think they are extremely relevant in
this discussion.
I believe supporting Python in our test infrastructure will attract
more contributors and thus would be a good step for the project in the
long run.
* including PyTest integration
** citation needed
--
Best regards,
Aleksander Alekseev
On Sun, 16 Jun 2024 at 23:04, Robert Haas <robertmhaas@gmail.com> wrote:
On Sat, Jun 15, 2024 at 6:00 PM Melanie Plageman
<melanieplageman@gmail.com> wrote:
Writing a new test framework in a popular language that makes it more
likely that more people will write more tests and test infrastructure
is such a completely different thing than suggesting we rewrite
Postgres in Rust that I feel that this comparison is unfair and,
frankly, a distraction from the discussion at hand.
I don't really agree with this.
<snip>
it's not *as* problematic if some tests are
written in one language and others in another as it would be if
different parts of the backend used different languages, and it
wouldn't be *as* hard if at some point we decided we wanted to convert
all remaining code to the new language.
Honestly, it sounds like you actually do agree with each other. It
seems you interpreted Melanie's use of "thing" as "people wanting to
use Rust/Python in the Postgres codebase", while I believe she probably
meant "all the problems and effort involved in the task of making that
possible". And afaict from your response, you definitely agree that
making it possible to use Rust in our main codebase is a lot more
difficult than making it possible to use Python in our testing code.
But, would I feel respected and valued as a participant in
that project? Would I have to use weird tools and follow arcane and
frustrating processes? If I did, *that* would make me give up. I don't
want to say that the choice of programming language doesn't matter at
all, but it seems to me that it might matter more because it's a
symptom of being unwilling to modernize things rather than for its own
sake.
I can personally definitely relate to this (although I wouldn't frame
it as strongly as you did). Postgres development definitely requires
weird tools and arcane processes (imho) when compared to most other
open source projects. The elephant in the room is of course the
mailing list development flow. But we have some good reasons for using
that[^1]. But most people have some limit on the amount of weirdness
they are willing to accept when wanting to contribute, and the mailing
list pushes us quite close to that limit for a bunch of people
already. Any additional weird tools/arcane processes might push some
people over that limit.
We've definitely made big improvements in modernizing our development
workflow over the last few years though: We now have CI (cfbot), a
modern build system (meson), and working autoformatting (requiring
pgindent on commit). These improvements have been very noticeable to
me, and I think we should continue such efforts. I think allowing
people to write tests in Python is one of the easier improvements that
we can make.
[^1]: Although I think those reasons apply much less to the
documentation, maybe we could allow github contributions for just
those.
On 2024-06-17 Mo 4:27 AM, Matthias van de Meent wrote:
Hi Greg, Jelte,
On Sat, 15 Jun 2024 at 23:53, Greg Sabino Mullane <htamfids@gmail.com>
wrote:
Those young-uns are also the same group who hold their nose when
coding in C, and are always clamoring for rewriting Postgres in Rust.
Could you point me to one occasion I have 'always' clamored for this,
or any of "those young-uns" in the community? I may not be a huge fan
of C, but rewriting PostgreSQL in [other language] is not on the list
of things I'm clamoring for. I may have given off-hand mentions that
[other language] would've helped in certain cases, sure, but I'd
hardly call that clamoring.
Greg was being a bit jocular here. I didn't take him seriously. But
there's maybe a better case to make the point he was making. Back in the
dark ages we used a source code control system called CVS. It's quite
unlike git and has a great many limitations and uglinesses, and there
was some pressure for us to move off it. If we had done so when it was
first suggested, we would probably have moved to using Subversion, which
is rather like CVS with many of the warts knocked off. Before long, some
distributed systems like Mercurial and git came along, and we, like most
of the world, chose git. Thus by waiting and not immediately doing what
was suggested we got a better solution. Moving twice would have been ...
painful.
I have written Python in the past. Not a huge amount, but it doesn't
feel like a foreign country to me, just the next town over instead of my
immediate neighbourhood. We even have a python script in the buildfarm
server code (not written by me). I'm sure if we started writing tests in
Python I would adjust. But I think we need to know what the advantages
are, beyond simple language preference. And getting Python to an
equivalent place to where we are with Perl will involve some work.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
(slowly catching up from the weekend email backlog)
On Fri, Jun 14, 2024 at 5:10 AM Robert Haas <robertmhaas@gmail.com> wrote:
I mean, both Perl and Python are Turing-complete.
Tom responded to this better than I could have, but I don't think this
is a helpful statement. In fact I opened the unconference session with
it and immediately waved it away as not-the-point.
I just don't believe in the idea that we're going to write one
category of tests in one language and another category in another
language.
You and I will probably not agree, then, because IMO we already do
that. SQL behavior is tested in SQL via pg_regress characterization
tests. End-to-end tests are written in Perl. Lower-level tests are
often written in C (and, unfortunately, then driven in Perl instead of
C; see above mail to Noah).
I'm fundamentally a polyglot tester by philosophy, so I don't see
careful use of multiple languages as an inherent problem to be solved.
They increase complexity (considerably so!) but generally provide
sufficient value to offset the cost.
As soon as we open the door to Python tests, people are
going to start writing the TAP tests that they would have written in
Perl in Python instead.
There's a wide spectrum of opinions between yours (which I will
cheekily paraphrase as "people will love testing in Python so much
they'll be willing to reinvent all of the wheels" -- which in the
short term would increase maintenance cost but in the long term sounds
like a very good problem to have), and people who seem to think that
new test suite infrastructure would sit unused because no one wants to
write tests anyway (to pull a strawman out of some hallway
conversations at PGConf.dev). I think the truth is probably somewhere
in the middle?
My prior mail was an attempt to bridge the gap between today and the
medium term, by introducing a series of compromises and incremental
steps in response to specific fears. We can jump forward to the end
state and try to talk about it, but I don't control the end state and
I don't have a crystal ball.
So as I see
it, the only reasonable plan here if we want to introduce testing in
Python (or C#, or Ruby, or Go, or JavaScript, or Lua, or LOLCODE) is
to try to achieve a reasonable degree of parity between that language
and Perl. Because then we can at least review the new infrastructure
all at once, instead of incrementally spread across many patches
written, reviewed, and committed by many different people.
I don't at all believe that a test API which is ported en masse as a
horizontal layer, without motivating vertical slices of test
functionality, is going to be fit for purpose.
And "written, reviewed, and committed by many different people" is a
feature for me, not a bug. One of the goals of the thread is to
encourage more community test writing than we currently have.
Otherwise, I could have kept silent (I am very happy with my personal
suite and have been comfortably maintaining it for a while). I am
trying to build community momentum around a pain point that is
currently rusted in place.
Consider the meson build system project. To get that committed, Andres
had to make it do pretty much everything MSVC could do and mostly
everything that configure could do
I think some lessons can be pulled from that, but fundamentally that's
a port of the build infrastructure done by a person with a commit bit.
There are some pretty considerable differences. (And even then, it
wasn't "done" with the first volley of patches, right?)
Thanks,
--Jacob
On Fri, Jun 14, 2024 at 8:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
I think that's an oversimplified analysis. Sure, the languages are
both Turing-complete, but for our purposes here they are both simply
glue languages around some set of testing facilities. Some of those
facilities will be provided by the base languages (plus whatever
extension modules we choose to require) and some by code we write.
The overall experience of writing tests will be determined by the
testing facilities far more than by the language used for glue.
+1. As an example, the more extensive (and high-quality) a language's
standard library, the more tests you'll be able to write. Convincing
committers to adopt a new third-party dependency is hard (for good
reason); the strength of the standard library should be considered as
a point of technical comparison.
That being the case, I do agree with your point that Python
equivalents to most of PostgreSQL::Test will need to be built up PDQ.
Maybe they can be better than the originals, in features or ease of
use, but "not there at all" is not better.
There's a wide gulf between "not there at all" and "reimplement it all
as a prerequisite for v1" as Robert proposed. I'm arguing for a middle
ground.
But what I'd really like to see is some comparison of the
language-provided testing facilities that we're proposing
to depend on. Why is pytest (or whatever) better than Test::More?
People are focusing a lot on failure reporting, and I agree with them,
but I did try to include more than just that in my OP.
I'll requote what I personally think is the #1 killer feature of
pytest, which prove and Test::More appear to be unable to accomplish
on their own: configurable isolation of tests from each other via
fixtures [1]https://docs.pytest.org/en/stable/how-to/fixtures.html.
Problem 1 (rerun failing tests): One architectural roadblock to this
in our Test::More suite is that tests depend on setup that's done by
previous tests. pytest allows you to declare each test's setup
requirements via pytest fixtures, letting the test runner build up the
world exactly as it needs to be for a single isolated test. These
fixtures may be given a "scope" so that multiple tests may share the
same setup for performance or other reasons.
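To make that concrete, here's a minimal self-contained sketch of a
scoped fixture (the names are illustrative only, not from any proposed
patch):

    import pytest

    @pytest.fixture(scope="module")
    def shared_state():
        # Built once per module; every test in this file that asks for it
        # receives the same instance. Teardown code after the yield would
        # run once, after the last test using the fixture finishes.
        yield {"initialized": True}

    def test_first(shared_state):
        assert shared_state["initialized"]

    def test_second(shared_state):
        # Same instance as test_first received, thanks to the module scope.
        assert shared_state["initialized"]

With function scope (the default), each test would instead get a
freshly built fixture, which is what makes single-test reruns safe.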
When I'm doing red-to-green feature development (e.g. OAuth) or
green-to-green refactoring (e.g. replacement of libiddawc with libcurl
in OAuth), quick cycle time and reduction of noise is extremely
important. I want to be able to rerun just the single red test I care
about before moving on.
(Tests may additionally be organized with custom attributes. My OAuth
suite contains tests that must run slowly due to mandatory timeouts;
I've marked them as slow, and they can be easily skipped from the test
runner.)
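(The stock mechanism for those attributes is pytest markers. A minimal
sketch, where the `slow` marker name is just my own convention:

    import pytest

    @pytest.mark.slow  # custom marker; declare it under "markers" in pytest.ini
    def test_mandatory_timeout():
        ...

Deselecting is then a matter of running `pytest -m "not slow"`.)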
2. The ability to break into a test case with the built-in debugger
[2]https://docs.pytest.org/en/stable/how-to/failures.html is a big
improvement over debugging via scattered print() statements.
(Along similar lines, even the ability to use Python's built-in REPL
increases velocity. Python users understand that they can execute
`python3` and be dropped into a sandbox to try out syntax or some
unfamiliar library. Searching for how to do this in Perl results in a
handful of custom-built scripts; the Perl monks here may know which
one to use, sure, but the point is to make it easy for newcomers to
write tests.)
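(If you haven't tried it: the relevant stock switches are --pdb, which
enters the debugger at the point of failure, and --trace, which breaks
at the start of each selected test. Python's built-in breakpoint()
works inside a test as well. A tiny sketch:

    def test_example():
        value = 6 * 7
        breakpoint()  # run via `pytest`; execution pauses here in pdb
        assert value == 42

)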
I also wonder about integration of python-based testing with what
we already have. A significant part of what you called the meson
work had to do with persuading pg_regress, isolationtester, etc
to output test results in the common format established by TAP.
Am I right in guessing that pytest will have nothing to do with that?
Andres covered this pretty well. I will note that I had problems with
pytest-tap itself [3]https://github.com/python-tap/pytest-tap/issues/30, and I'm unclear whether that represents a bug
in pytest-tap or a bug in pytest.
Thanks,
--Jacob
[1]: https://docs.pytest.org/en/stable/how-to/fixtures.html
[2]: https://docs.pytest.org/en/stable/how-to/failures.html
[3]: https://github.com/python-tap/pytest-tap/issues/30
Hi everyone, it's been a while for this thread.
= Status =
Since the last email, quite a bit of offline chatter has happened,
particularly at FOSDEM [1]https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2025_Developer_Meeting#Additional_toolchains_in_the_test_suite and PGConf.dev [2]https://wiki.postgresql.org/wiki/PGConf.dev_2025_Developer_Meeting. The dev-meeting
audiences, at least, appear to have reached a critical mass of nodding
heads and cautious optimism around the use of a pytest suite. Some are
more optimistic than others, some are quite skeptical, some are
completely silent, but no one has -1'd it yet.
I think I've been solidly outvoted in my attempt to support Python and
Perl suites side-by-side in perpetuity. Many, many people have
expressed that they'd feel disappointed if that were a permanent
situation, so I've been trying to adapt my approach to the idea that
we would need to port existing Perl tests at _some_ point in the
timeline.
Simultaneously, I've been approached by some people offline who are
pretty clearly waiting for me to make the first move here. I'm a bit
uncomfortable with that -- no one needs my permission to post patches
in this vein! -- but I get that this entire topic is a balancing act,
so it makes some sense. Thank you all for your patience. Here is a
patchset.
= Patches =
This is dev-quality, not targeted for commit yet, but hopefully enough
to spark some opinions and conversation.
0001: Prepares for pytest support, by adding Meson summary lines
indicating when the Perl TAP tests are enabled.
Peter E mentioned to me that it was hard to figure out which suites
had been enabled, so hopefully this helps a bit. I'm not sure what to
do on the Autoconf side.
0002: Adds Meson and Autoconf support for running pytest, via
--enable-pytest or -Dpytest=enabled.
This is a skeleton. There is no Postgres-specific test logic in this
patch. So if you like pytest, but hate how I write tests in it, you
have the option of basing your alternative on top of this patch to
show us. I also added a failing test so that you can all see what that
looks like in the CI. (I'll put this in the Draft CF [3]https://commitfest.postgresql.org/54/.)
When enabling the feature, the check_pytest.py script checks that the
configured `PYTHON` executable has all of pytest-requirements.txt
installed. Peter pointed out that this is incorrect: what we actually
want to check is that the interpreter used by pytest has all of the
required packages, and the two could be different.
One way to solve that problem is to just get rid of the check script,
and assume that if pytest is installed, we're good to go. But I ran
into enough installation weirdness in the CI (especially for MinGW,
ugh) that it was really helpful to have a script say "no, you're still
missing this." I'd like to help buildfarm operators in a similar way.
Unfortunately, check_pytest.py is pretty complex, and it doesn't make
as much sense to pay that cost unless and until we add more package
dependencies, as 0003 does later in the set. Thoughts?
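(Side note: one way to sidestep the interpreter mismatch entirely is to
invoke pytest as a module of the interpreter we already checked, rather
than via the wrapper script. A sketch of the idea, not what the patch
currently does:

    import subprocess
    import sys

    # Running `<interpreter> -m pytest` guarantees that the environment we
    # checked for packages is the same one that executes the tests, avoiding
    # any mismatch with the `pytest` wrapper script's shebang line.
    subprocess.run([sys.executable, "-m", "pytest", "--version"], check=True)

`python -m pytest` is documented, equivalent behavior, apart from also
prepending the current directory to sys.path.)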
For Autoconf, I'm just running pytest directly rather than translating
its output through pgtap + prove, mostly because I think that's a
better dev experience. It has color and everything. Is that okay?
0003: Ports a tiny subset of the SSL client tests.
This is enough to show off
- pytest fixtures and caching via scopes
- FFI binding to libpq via ctypes
- generation of TLS certs via the py-cryptography package
- low-level protocol testing without a "real" server
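To give a flavor of the ctypes approach, here's a minimal sketch (not
the pg.libpq fixture from the patch, and it assumes find_library can
locate an installed libpq):

    import ctypes
    import ctypes.util

    # Load the installed libpq and call one of its simplest entry points.
    path = ctypes.util.find_library("pq")
    assert path is not None, "libpq not found"

    libpq = ctypes.CDLL(path)
    libpq.PQlibVersion.restype = ctypes.c_int

    print("libpq version:", libpq.PQlibVersion())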
Andres requested a while back that we control concurrency via Meson
rather than pytest's global discovery. So there is no use of
pytest-xdist here. (I don't think we're in much danger of having CPUs
go idle during a `meson test` run, anyway.)
0004: Ports a tiny subset of the SSL server tests.
This additionally shows off
- more complicated test parameterization
- my vision of how we can cache and configure a running server
instance via pytest fixtures
Basically, server configuration changes are pushed and popped off of a
stack as fixtures are entered and exited. (C++ folks will be
completely unimpressed, but I'm still proud of it.) We'd need to
discuss the proper use of scopes and establish project-wide fixture
conventions if we go in this direction, but I hope we do something
like it; I really want to be able to run each and every test in
isolation.
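In miniature, the idea looks something like this toy sketch (the real
fixtures rewrite server configuration and reload; this toy only tracks
the stack):

    import pytest

    class ConfigStack:
        """Toy stand-in for the patchset's server-configuration stack."""

        def __init__(self):
            self._stack = [{}]

        def push(self, **settings):
            # A real implementation would write the merged settings to
            # postgresql.conf and signal the server to reload here.
            self._stack.append({**self._stack[-1], **settings})

        def pop(self):
            self._stack.pop()

        @property
        def current(self):
            return self._stack[-1]

    @pytest.fixture(scope="session")
    def config():
        return ConfigStack()

    @pytest.fixture
    def ssl_on(config):
        config.push(ssl="on")  # applied when the fixture is entered...
        yield config
        config.pop()           # ...and reverted when it exits

    def test_ssl_enabled(ssl_on):
        assert ssl_on.current["ssl"] == "on"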
There are some Win32 syscall examples as well, for those who are
interested, but there's nothing that uses that code at the moment.
0005: Adds a new variable to Cirrus to control the default Meson suites.
0006: Sets that new variable to just the pytest suites, so that I
don't burn too much Cirrus time for this proof of concept, and turns
on all OSes.
= Other General Thoughts =
You might immediately complain that we don't want to open-code every
single byte in the protocol for a test, as I have done here. I don't
want to, either; I want to use Construct [4]https://construct.readthedocs.io/en/latest/ for that. I also want us
to have end-to-end tests that combine the fixtures in 0003 and 0004 --
but I want everyone to be able to choose between end-to-end tests and
mocked client/server tests, and I'm most excited about the latter, so
that's what I focused on here.
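For a flavor of what that could look like, here's a sketch in
Construct's declarative style (not code from this patchset, and the
length value is just example data):

    from construct import Int16ub, Int32ub, Struct

    # The fixed-size prefix of a PostgreSQL v3 startup packet, declared
    # field-by-field instead of open-coding the bytes.
    StartupPrefix = Struct(
        "length" / Int32ub,
        "protocol_major" / Int16ub,
        "protocol_minor" / Int16ub,
    )

    data = StartupPrefix.build(dict(length=8, protocol_major=3, protocol_minor=0))
    assert data == b"\x00\x00\x00\x08\x00\x03\x00\x00"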
The distinction between the fixture scopes "session" and "module" is
useful in the Autoconf case (which runs all tests in a pyt/ folder at
once) and useless in the Meson case (which runs each pyt/*.py test in
an individual pytest instance). I suspect this will cause some
friction, but I don't know how much. I haven't been very careful in my
use of "module" scope yet.
I think devs should be able to run pytest directly from the source
root. So I've put settings specific to Autoconf/Meson into
Makefile.global/meson.build instead of pytest.ini. (For instance, I
don't imagine anyone wants to read the pgtap output directly.)
Formatting of the patchset has been provided by isort + black. I hear
ruff is the new kid on the block, but I don't see raw speed being
incredibly important for us. And for this particular use case, I
really like the idea of an "unconfigurable" PEP8 formatter so we can
do other things with our time.
Huge thanks to Andres and Bilal for helping me debug Windows problems in the CI.
Wow, thanks for reading this far. WDYT?
--Jacob
[1]: https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2025_Developer_Meeting#Additional_toolchains_in_the_test_suite
[2]: https://wiki.postgresql.org/wiki/PGConf.dev_2025_Developer_Meeting
[3]: https://commitfest.postgresql.org/54/
[4]: https://construct.readthedocs.io/en/latest/
Attachments:
Attachment: v1-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch (application/octet-stream)
From be79415cd1f443732a9a92f15216d3c9e456ac02 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v1 1/6] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index ab8101d67b2..2598758f6d3 100644
--- a/meson.build
+++ b/meson.build
@@ -3949,6 +3949,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -3985,3 +3986,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
--
2.34.1
Attachment: v1-0002-Add-support-for-pytest-test-suites.patch (application/octet-stream)
From 11759351232507bd290ae825d2a57f4af9fa88e0 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v1 2/6] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
test_something.py is intended to show a sample failure in the CI.
TODOs:
- check_pytest.py has a known issue: it checks the Python interpreter
linked to PL/Python rather than the Python interpreter in use by
pytest. Perhaps the check script should just go away?
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
---
.cirrus.tasks.yml | 38 +++--
.gitignore | 1 +
config/check_pytest.py | 138 ++++++++++++++++++
config/pytest-requirements.txt | 22 +++
configure | 170 ++++++++++++++++++++++-
configure.ac | 30 +++-
meson.build | 88 ++++++++++++
meson_options.txt | 8 +-
pytest.ini | 1 +
src/Makefile.global.in | 23 +++
src/makefiles/meson.build | 2 +
src/test/Makefile | 11 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 +++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/plugins/pgtap.py | 193 ++++++++++++++++++++++++++
src/test/pytest/pyt/test_something.py | 17 +++
18 files changed, 765 insertions(+), 15 deletions(-)
create mode 100644 config/check_pytest.py
create mode 100644 config/pytest-requirements.txt
create mode 100644 pytest.ini
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/plugins/pgtap.py
create mode 100644 src/test/pytest/pyt/test_something.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index eca9d62fc22..80f9b394bd2 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -222,7 +224,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -311,7 +315,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -322,6 +329,7 @@ task:
OS_NAME: openbsd
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
+ TERM: # TODO why does pytest print ANSI escapes on OpenBSD?
MESON_FEATURES: >-
-Dbsd_auth=enabled
@@ -330,7 +338,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -489,8 +499,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -513,14 +525,15 @@ task:
su postgres <<-EOF
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang-16"
+ CLANG="ccache clang-16" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -650,6 +663,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -699,6 +714,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -772,6 +788,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -780,8 +798,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -844,7 +864,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..268426003b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
diff --git a/config/check_pytest.py b/config/check_pytest.py
new file mode 100644
index 00000000000..14b2f3eec9b
--- /dev/null
+++ b/config/check_pytest.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Verify that pytest-requirements.txt is satisfied. This would probably be
+# easier with pip, but requiring pip on build machines is a non-starter for
+# many.
+#
+# The design philosophy of this script is to bend over backwards to help people
+# figure out what is missing. The target audience for error output is the
+# buildfarm operator who just wants to get the tests running, not the test
+# developer who presumably already knows how to solve these problems.
+
+import sys
+from typing import List # TODO: Python 3.9 will remove the need for this
+
+
+def main():
+ if len(sys.argv) != 2:
+ sys.exit("usage: python {} REQUIREMENTS_FILE".format(sys.argv[0]))
+
+ requirements_file = sys.argv[1]
+ with open(requirements_file, "r") as f:
+ requirements = f.readlines()
+
+ found = packaging_check(requirements)
+ if not found:
+ sys.exit("See src/test/pytest/README for package installation help.")
+
+
+def packaging_check(requirements: List[str]) -> bool:
+ """
+ The preferred dependency check, which unfortunately needs newer Python
+ facilities. Returns True if all dependencies were found.
+ """
+ try:
+ # First, attempt to find importlib.metadata. This is part of the
+ # standard library from 3.8 onwards. Earlier Python versions have an
+ # official backport called importlib_metadata, which can generally be
+ # installed as a separate OS package (e.g. python3-importlib-metadata).
+ # This complication can be removed once we stop supporting Python 3.7.
+ try:
+ from importlib import metadata
+ except ImportError:
+ import importlib_metadata as metadata
+
+ # packaging contains the PyPA definitions of requirement specifiers.
+ # This is again contained in a separate OS package (for example,
+ # python3-packaging).
+ import packaging
+ from packaging.requirements import Requirement
+
+ except ImportError as err:
+ # We don't even have enough prerequisites to check our prerequisites.
+ # Try to fall back on the deprecated parser, to get a better error
+ # message.
+ found = setuptools_fallback(requirements)
+
+ if not found:
+ # Well, the best we can do is just print the import error as-is.
+ print(err, file=sys.stderr)
+
+ return False
+
+ # Strip extraneous whitespace, whole-line comments, and empty lines from our
+ # specifier list.
+ requirements = [r.strip() for r in requirements]
+ requirements = [r for r in requirements if r and r[0] != "#"]
+
+ found = True
+ for spec in requirements:
+ req = Requirement(spec)
+
+ # Skip any packages marked as unneeded for this particular Python env.
+ if req.marker and not req.marker.evaluate():
+ continue
+
+ # Make sure the package is installed...
+ try:
+ version = metadata.version(req.name)
+ except metadata.PackageNotFoundError:
+ print("Package '{}' is not installed".format(req.name), file=sys.stderr)
+ found = False
+ continue
+
+ # ...and that it has a compatible version.
+ if not req.specifier.contains(version):
+ print(
+ "Package '{}' has version {}, but '{}' is required".format(
+ req.name, version, req.specifier
+ ),
+ file=sys.stderr,
+ )
+ found = False
+ continue
+
+ return found
+
+
+def setuptools_fallback(requirements: List[str]) -> bool:
+ """
+ An alternative dependency helper, based on the old deprecated pkg_resources
+ module in setuptools, which is pretty widely available in older Pythons. The
+ point of this is to bootstrap the user into an environment that can run the
+ packaging_check().
+
+ Returns False if pkg_resources is also unavailable, in which case we just
+ have to do our best.
+ """
+ try:
+ import pkg_resources
+ except ModuleNotFoundError:
+ return False
+
+ # An extra newline makes the Autoconf output easier to read.
+ print(file=sys.stderr)
+
+ # Go one-by-one through the requirements, printing each missing dependency.
+ found = True
+ for r in requirements:
+ try:
+ pkg_resources.require(r)
+ except pkg_resources.DistributionNotFound as err:
+ # The error descriptions given here are pretty good as-is.
+ print(err, file=sys.stderr)
+ found = False
+ except pkg_resources.RequirementParseError as err:
+ assert False # TODO
+
+ # The only reason the fallback would be called is if we're missing required
+ # packages. So if we "found them", the requirements file is broken...
+ assert (
+ not found
+ ), "setuptools_fallback() succeeded unexpectedly; is the requirements file incomplete?"
+
+ return True
+
+
+if __name__ == "__main__":
+ main()
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
new file mode 100644
index 00000000000..157262a684f
--- /dev/null
+++ b/config/pytest-requirements.txt
@@ -0,0 +1,22 @@
+#
+# This file contains the Python packages which are required in order for us to
+# enable pytest.
+#
+# The syntax is a *subset* of pip's requirements.txt syntax, so that both pip
+# and check_pytest.py can use it. Only whole-line comments and standard Python
+# dependency specifiers are allowed. pip-specific goodies like includes and
+# environment substitutions are not supported; keep it simple.
+#
+# Packages belong here if their absence should cause a configuration failure. If
+# you'd like to make a package optional, consider using pytest.importorskip()
+# instead.
+#
+
+# pytest 7.0 was the last version which supported Python 3.6, but the BSDs have
+# started putting 8.x into ports, so we support both. (pytest 8 can be used
+# throughout once we drop support for Python 3.7.)
+pytest >= 7.0, < 9
+
+# These are meta-packages which allow check_pytest.py to run.
+packaging
+importlib_metadata ; python_version < "3.8"
diff --git a/configure b/configure
index 39c68161cec..860b07763dc 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,7 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -773,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -851,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1551,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3639,7 +3645,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3667,6 +3673,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19120,6 +19152,140 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ # Mirror the prove checks, above, for pytest. We don't require the user to
+ # have selected --with-python, but we do need a Python installation.
+ if test -z "$PYTHON"; then
+ if test -z "$PYTHON"; then
+ for ac_prog in python3 python
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTHON+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTHON in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTHON="$PYTHON" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTHON="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTHON=$ac_cv_path_PYTHON
+if test -n "$PYTHON"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHON" >&5
+$as_echo "$PYTHON" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTHON" && break
+done
+
+else
+ # Report the value of PYTHON in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTHON" >&5
+$as_echo_n "checking for PYTHON... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHON" >&5
+$as_echo "$PYTHON" >&6; }
+fi
+
+if test x"$PYTHON" = x""; then
+ as_fn_error $? "Python not found" "$LINENO" 5
+fi
+
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Python packages required for pytest" >&5
+$as_echo_n "checking for Python packages required for pytest... " >&6; }
+ modulestderr=`"$PYTHON" "$srcdir/config/check_pytest.py" "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`
+ if test $? -eq 0; then
+ echo "$modulestderr" >&5
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $modulestderr" >&5
+$as_echo "$modulestderr" >&6; }
+ as_fn_error $? "Additional Python packages are required to run the pytest suites" "$LINENO" 5
+ fi
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 066e3976c0a..f4bf94a078f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -231,11 +231,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2442,6 +2447,27 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ # Mirror the prove checks, above, for pytest. We don't require the user to
+ # have selected --with-python, but we do need a Python installation.
+ if test -z "$PYTHON"; then
+ PGAC_PATH_PYTHON
+ fi
+ AC_MSG_CHECKING(for Python packages required for pytest)
+ [modulestderr=`"$PYTHON" "$srcdir/config/check_pytest.py" "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`]
+ if test $? -eq 0; then
+ echo "$modulestderr" >&AS_MESSAGE_LOG_FD
+ AC_MSG_RESULT(yes)
+ else
+ AC_MSG_RESULT([$modulestderr])
+ AC_MSG_ERROR([Additional Python packages are required to run the pytest suites])
+ fi
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ AC_MSG_ERROR([pytest not found])
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 2598758f6d3..5166d82e607 100644
--- a/meson.build
+++ b/meson.build
@@ -1699,6 +1699,35 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest_check = run_command(python, 'config/check_pytest.py',
+ 'config/pytest-requirements.txt', check: false)
+ if pytest_check.returncode() != 0
+ message(pytest_check.stderr().strip())
+ if pytestopt.enabled()
+ error('Additional Python packages are required to run the pytest suites.')
+ else
+ warning('Additional Python packages are required to run the pytest suites.')
+ endif
+ endif
+
+ pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+
+ if pytest.found() and pytest_check.returncode() == 0
+ pytest_enabled = true
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3776,6 +3805,63 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = [
+ pytest.full_path(),
+ '-c', meson.project_source_root() / 'pytest.ini',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest' / 'plugins')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3950,6 +4036,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -3990,6 +4077,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 00000000000..eea2c180278
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1 @@
+[pytest]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b1b357beaa..fc744166bd2 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,27 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest/plugins:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pytest.ini' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 54dbc059ada..f69eb1068db 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -147,6 +148,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
new file mode 100644
index 00000000000..ef8291e291c
--- /dev/null
+++ b/src/test/pytest/plugins/pgtap.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+from typing import Optional
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/test/pytest/pyt/test_something.py b/src/test/pytest/pyt/test_something.py
new file mode 100644
index 00000000000..5bd45618512
--- /dev/null
+++ b/src/test/pytest/pyt/test_something.py
@@ -0,0 +1,17 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import pytest
+
+
+@pytest.fixture
+def hey():
+ yield
+ raise RuntimeError("uh-oh")
+
+
+def test_something(hey):
+ assert 2 == 4
+
+
+def test_something_else():
+ assert 2 == 2
--
2.34.1
Attachment: v1-0003-WIP-pytest-Add-some-SSL-client-tests.patch
From 97908c2d566fc2df9e9b9247eb044dda4bfd56c3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 19 Aug 2025 12:56:45 -0700
Subject: [PATCH v1 3/6] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The `pg` test package contains some helpers and fixtures (as well as
some self-tests for more complicated behavior). Of note:
- pg.require_test_extra() lets you mark a test/class/module as skippable
if PG_TEST_EXTRA does not contain the necessary strings.
- pg.remaining_timeout() is a function which can be repeatedly called to
determine how much of the PG_TEST_TIMEOUT_DEFAULT remains for the
current test item.
- pg.libpq is a fixture that wraps libpq.so in a more friendly, but
still low-level, ctypes FFI. Allocated resources are unwound and
released during test teardown.
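
As a rough sketch of how these pieces compose in a test module (the
host/port here are placeholders, not part of the patch):

    import pg

    # Skip this whole module unless PG_TEST_EXTRA contains "ssl".
    pytestmark = pg.require_test_extra("ssl")

    def test_empty_query(libpq):
        # must_connect() applies PG_TEST_TIMEOUT_DEFAULT as the
        # connect_timeout unless overridden, and raises libpq.Error
        # on failure. PQfinish() happens during fixture teardown.
        conn = libpq.must_connect(host="/tmp", port=5432)
        assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY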
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
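
In sketch form, a test hands the server half of the conversation to
the fixture and then drives the client as usual (serve_nothing is a
hypothetical callback; see the suite below for real ones):

    def test_example(tcp_server, libpq):
        def serve_nothing(s: socket.socket):
            # Runs on the listener's background thread; any exception
            # raised here is re-raised on the main thread during
            # fixture teardown.
            assert not s.recv(1), "client sent unexpected data"

        tcp_server.background(serve_nothing)
        # ... drive the client side of the connection here ...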
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 +-
config/pytest-requirements.txt | 10 ++
pytest.ini | 5 +
src/test/pytest/meson.build | 1 +
src/test/pytest/pg/__init__.py | 3 +
src/test/pytest/pg/_env.py | 55 ++++++
src/test/pytest/pg/fixtures.py | 212 +++++++++++++++++++++++
src/test/pytest/pyt/conftest.py | 3 +
src/test/pytest/pyt/test_libpq.py | 171 ++++++++++++++++++
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 129 ++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++
13 files changed, 887 insertions(+), 6 deletions(-)
create mode 100644 src/test/pytest/pg/__init__.py
create mode 100644 src/test/pytest/pg/_env.py
create mode 100644 src/test/pytest/pg/fixtures.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 80f9b394bd2..4e744f1c105 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -225,6 +225,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -316,6 +317,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -339,8 +341,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -501,8 +504,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -643,6 +647,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -663,6 +668,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -801,7 +807,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -864,7 +870,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
index 157262a684f..d2e26040466 100644
--- a/config/pytest-requirements.txt
+++ b/config/pytest-requirements.txt
@@ -20,3 +20,13 @@ pytest >= 7.0, < 9
# These are meta-packages which allow check_pytest.py to run.
packaging
importlib_metadata ; python_version < "3.8"
+
+# Notes on the cryptography package:
+# - 3.3.2 is shipped on Debian bullseye.
+# - 3.4.x drops support for Python 2, making it a version of note for older LTS
+# distros.
+# - 35.x switched versioning schemes and moved to Rust parsing.
+# - 40.x is the last version supporting Python 3.6.
+# XXX Is it appropriate to require cryptography, or should we simply skip
+# dependent tests?
+cryptography >= 3.3.2
diff --git a/pytest.ini b/pytest.ini
index eea2c180278..837097ba0bd 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -1 +1,6 @@
[pytest]
+
+minversion = 7.0
+
+# Common test code can be found here.
+pythonpath = src/test/pytest
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..f53193e8686 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -11,6 +11,7 @@ tests += {
'pytest': {
'tests': [
'pyt/test_something.py',
+ 'pyt/test_libpq.py',
],
},
}
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
new file mode 100644
index 00000000000..ef8faf54ca4
--- /dev/null
+++ b/src/test/pytest/pg/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import has_test_extra, require_test_extra
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pg/_env.py
new file mode 100644
index 00000000000..6f18af07844
--- /dev/null
+++ b/src/test/pytest/pg/_env.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+from typing import List, Optional
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extra(*keys: str) -> pytest.MarkDecorator:
+ """
+ A convenience annotation which will skip tests unless all of the required
+ keys are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pg.require_test_extra("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pg.require_test_extra("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([has_test_extra(k) for k in keys]),
+ reason="requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys)),
+ )
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
new file mode 100644
index 00000000000..b5d3bff69a8
--- /dev/null
+++ b/src/test/pytest/pg/fixtures.py
@@ -0,0 +1,212 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import platform
+import time
+from typing import Any, Callable, Dict
+
+import pytest
+
+from ._env import test_timeout_default
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle():
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(name, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ return lib
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self):
+ return self._lib.PQresultStatus(self._res)
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str) -> PGresult:
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+
+@pytest.fixture
+def libpq(libpq_handle, remaining_timeout):
+ """
+ Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
+ track of allocated resources and cleans them up during teardown. See
+ _Libpq's public API for details.
+ """
+
+ class _Libpq(contextlib.ExitStack):
+ CONNECTION_OK = 0
+
+ PGRES_EMPTY_QUERY = 0
+
+ class Error(RuntimeError):
+ """
+ libpq.Error is the exception class for application-level errors that
+ are encountered during libpq operations.
+ """
+
+ pass
+
+ def __init__(self):
+ super().__init__()
+ self.lib = libpq_handle
+
+ def _connstr(self, opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+ def must_connect(self, **opts) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a libpq.PGconn object wrapping the connection handle. A
+ failure will raise libpq.Error.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
+
+ # Ensure the connection handle is always closed at the end of the
+ # test.
+ conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
+
+ if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
+ raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
+
+ return conn
+
+ with _Libpq() as lib:
+ yield lib
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..ecb72be26d7
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from pg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..9f0857cc612
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,171 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(libpq, opts, expected):
+ """Tests the escape behavior for libpq._connstr()."""
+ assert libpq._connstr(opts) == expected
+
+
+def test_must_connect_errors(libpq):
+ """Tests that must_connect() raises libpq.Error."""
+ with pytest.raises(libpq.Error, match="invalid connection option"):
+ libpq.must_connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(libpq.Error, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ with libpq:
+ libpq.must_connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..fb4db372f03
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,129 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+import pg
+from pg.fixtures import *
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..28110ae0717
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(libpq.Error, match="server does not support SSL"):
+ with libpq: # XXX tests shouldn't need to do this
+ libpq.must_connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(libpq, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = libpq.must_connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
--
2.34.1
Attachment: v1-0004-WIP-pytest-Add-some-server-side-SSL-tests.patch
From 110ee527ae2b6cac4fc815eb1839068c33dedeb5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 22 Aug 2025 17:39:40 -0700
Subject: [PATCH v1 4/6] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
New session-level fixtures have been added which probably need to
migrate to the `pg` package. Of note:
- datadir points to the server's data directory
- sockdir points to the server's UNIX socket/lock directory
- server_instance actually inits and starts a server via the pg_ctl on
PATH (and could eventually point at an installcheck target)
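
As a hedged sketch, a dependent fixture might consume these like so
(admin_conninfo is a hypothetical name; server_instance and sockdir
come from this patch):

    @pytest.fixture(scope="session")
    def admin_conninfo(server_instance, sockdir):
        # server_instance yields (hostaddr, port); administrative
        # utilities connect over the UNIX socket directory instead.
        _hostaddr, port = server_instance
        return dict(host=str(sockdir), port=port)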
Wrapping these session-level fixtures is pg_server[_session], which
provides APIs for configuration changes that unwind themselves at the
end of fixture scopes. There's also an example of nested scopes, via
pg_server_session.subcontext(). Many TODOs remain before we're on par
with Test::Cluster, but this should illustrate my desired architecture
pretty well.
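
For instance, a per-test configuration change might look like this (a
minimal sketch mirroring the docstrings below; reloading(), conf.set(),
and hba.prepend() are all from the patch):

    def test_example(pg_server):
        # Config edits are written to file and the server is signaled
        # to reload; both are unwound (with another reload) when the
        # per-test pg_server subcontext exits.
        with pg_server.reloading() as s:
            s.conf.set(log_min_messages="debug1")
            s.hba.prepend("local all all trust")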
Windows currently uses SCRAM-over-UNIX for the admin account rather than
SSPI-over-TCP. There's some dead Win32 code in pg.current_windows_user,
but I've kept it as an illustration of how a developer might write such
code for SSPI. I'll probably remove it in a future patch version.
TODOs:
- port more server configuration behavior from PostgreSQL::Test::Cluster
- decide again on "session" vs. "module" scope for server fixtures
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/pytest/pg/__init__.py | 1 +
src/test/pytest/pg/_win32.py | 145 +++++++++
src/test/ssl/pyt/conftest.py | 113 +++++++
src/test/ssl/pyt/test_server.py | 538 ++++++++++++++++++++++++++++++++
4 files changed, 797 insertions(+)
create mode 100644 src/test/pytest/pg/_win32.py
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
index ef8faf54ca4..5dae49b6406 100644
--- a/src/test/pytest/pg/__init__.py
+++ b/src/test/pytest/pg/__init__.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
from ._env import has_test_extra, require_test_extra
+from ._win32 import current_windows_user
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pg/_win32.py
new file mode 100644
index 00000000000..3fd67b10191
--- /dev/null
+++ b/src/test/pytest/pg/_win32.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import ctypes
+import platform
+
+
+def current_windows_user():
+ """
+ A port of pg_regress.c's current_windows_user() helper. Returns
+ (accountname, domainname).
+
+ XXX This is dead code now, but I'm keeping it as a motivating example of
+ Win32 interaction, and someone may find it useful in the future when writing
+ SSPI tests?
+ """
+ try:
+ advapi32 = ctypes.windll.advapi32
+ kernel32 = ctypes.windll.kernel32
+ except AttributeError:
+ raise RuntimeError(
+ f"current_windows_user() is not supported on {platform.system()}"
+ )
+
+ def raise_winerror_when_false(result, func, arguments):
+ """
+ A ctypes errcheck handler that raises WinError (which will contain the
+ result of GetLastError()) when the function's return value is false.
+ """
+ if not result:
+ raise ctypes.WinError()
+
+ #
+ # Function Prototypes
+ #
+
+ from ctypes import wintypes
+
+ # GetCurrentProcess
+ kernel32.GetCurrentProcess.restype = wintypes.HANDLE
+ kernel32.GetCurrentProcess.argtypes = []
+
+ # OpenProcessToken
+ TOKEN_READ = 0x00020008
+
+ advapi32.OpenProcessToken.restype = wintypes.BOOL
+ advapi32.OpenProcessToken.argtypes = [
+ wintypes.HANDLE,
+ wintypes.DWORD,
+ wintypes.PHANDLE,
+ ]
+ advapi32.OpenProcessToken.errcheck = raise_winerror_when_false
+
+ # GetTokenInformation
+ PSID = wintypes.LPVOID # we don't need the internals
+ TOKEN_INFORMATION_CLASS = wintypes.INT
+ TokenUser = 1
+
+ class SID_AND_ATTRIBUTES(ctypes.Structure):
+ _fields_ = [
+ ("Sid", PSID),
+ ("Attributes", wintypes.DWORD),
+ ]
+
+ class TOKEN_USER(ctypes.Structure):
+ _fields_ = [
+ ("User", SID_AND_ATTRIBUTES),
+ ]
+
+ advapi32.GetTokenInformation.restype = wintypes.BOOL
+ advapi32.GetTokenInformation.argtypes = [
+ wintypes.HANDLE,
+ TOKEN_INFORMATION_CLASS,
+ wintypes.LPVOID,
+ wintypes.DWORD,
+ wintypes.PDWORD,
+ ]
+ advapi32.GetTokenInformation.errcheck = raise_winerror_when_false
+
+ # LookupAccountSid
+ SID_NAME_USE = wintypes.INT
+ PSID_NAME_USE = ctypes.POINTER(SID_NAME_USE)
+
+ advapi32.LookupAccountSidW.restype = wintypes.BOOL
+ advapi32.LookupAccountSidW.argtypes = [
+ wintypes.LPCWSTR,
+ PSID,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ PSID_NAME_USE,
+ ]
+ advapi32.LookupAccountSidW.errcheck = raise_winerror_when_false
+
+ #
+ # Implementation (see pg_SSPI_recv_auth())
+ #
+
+ # Get the current process token...
+ token = wintypes.HANDLE()
+ proc = kernel32.GetCurrentProcess()
+ advapi32.OpenProcessToken(proc, TOKEN_READ, token)
+
+ # ...then read the TOKEN_USER struct for that token...
+ info = TOKEN_USER()
+ infolen = wintypes.DWORD()
+
+ try:
+ # (GetTokenInformation creates a buffer bigger than TOKEN_USER, so we
+ # have to query the correct length first.)
+ advapi32.GetTokenInformation(token, TokenUser, None, 0, ctypes.byref(infolen))
+ assert False, "GetTokenInformation succeeded unexpectedly"
+
+ except OSError as err:
+ assert err.winerror == 122 # insufficient buffer
+
+ ctypes.resize(info, infolen.value)
+ advapi32.GetTokenInformation(
+ token,
+ TokenUser,
+ ctypes.byref(info),
+ ctypes.sizeof(info),
+ ctypes.byref(infolen),
+ )
+
+ # ...then pull the account and domain names out of the user SID.
+ MAXPGPATH = 1024
+
+ account = ctypes.create_unicode_buffer(MAXPGPATH)
+ domain = ctypes.create_unicode_buffer(MAXPGPATH)
+ accountlen = wintypes.DWORD(ctypes.sizeof(account))
+ domainlen = wintypes.DWORD(ctypes.sizeof(domain))
+ use = SID_NAME_USE()
+
+ advapi32.LookupAccountSidW(
+ None,
+ info.User.Sid,
+ account,
+ ctypes.byref(accountlen),
+ domain,
+ ctypes.byref(domainlen),
+ ctypes.byref(use),
+ )
+
+ return (account.value, domain.value)
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index fb4db372f03..85d2c994828 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,6 +1,12 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
+import os
+import pathlib
+import platform
+import secrets
+import socket
+import subprocess
import tempfile
from collections import namedtuple
@@ -127,3 +133,110 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def server_instance(certs, datadir, sockdir, winpassword):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ TODO: when installcheck is supported, this should optionally point to the
+ currently running server instead.
+ """
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ subprocess.check_call(
+ ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
+ )
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = os.path.join(datadir, "postgresql.log")
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
+ yield (hostaddr, port)
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..2d0be735371
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,538 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import ssl
+import struct
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Dict, List, Union
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture(scope="session")
+def connenv(server_instance, sockdir, datadir):
+ """
+ Provides the values for several PG* environment variables needed for our
+ utility programs to connect to the server_instance.
+ """
+ return {
+ "PGHOST": str(sockdir),
+ "PGPORT": str(server_instance[1]),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(datadir),
+ }
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ #
+ # TODO: this is less helpful if there are multiple layers, because it's
+ # not clear which backup to look at. Can the backup name be printed as
+ # part of the failed test output? Should we only swap on test failure?
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines: Union[str, List[str]]):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for l in lines:
+ if isinstance(l, list):
+ print(*l, file=f)
+ else:
+ print(l, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+@pytest.fixture(scope="session")
+def pg_server_session(server_instance, connenv, datadir, winpassword):
+ """
+ Provides common routines for configuring and connecting to the
+ server_instance. For example:
+
+ users = pg_server_session.create_users("one", "two")
+ dbs = pg_server_session.create_dbs("default")
+
+ with pg_server_session.reloading() as s:
+ s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
+
+ conn = connect_somehow(**pg_server_session.conninfo)
+ ...
+
+ Attributes of note are
+ - .conninfo: provides TCP connection info for the server
+
+ This fixture unwinds its configuration changes at the end of the pytest
+ session. For more granular changes, pg_server_session.subcontext() splits
+ off a "nested" context to allow smaller scopes.
+ """
+
+ class _Server(contextlib.ExitStack):
+ conninfo = dict(
+ hostaddr=server_instance[0],
+ port=server_instance[1],
+ )
+
+ # for _backup_configuration()
+ _Backup = namedtuple("Backup", "conf, hba")
+
+ def subcontext(self):
+ """
+ Creates a new server stack instance that can be tied to a smaller
+ scope than "session".
+ """
+ # So far, there doesn't seem to be a need to link the two objects,
+ # since HBA/Config/FileBackup operate directly on the filesystem and
+ # will appear to "nest" naturally.
+ return self.__class__()
+
+ def create_users(self, *userkeys: str) -> Dict[str, str]:
+ """
+ Creates new users which will be dropped at the end of the server
+ context.
+
+ For each provided key, a related user name will be selected and
+ stored in a map. This map is returned to let calling code look up
+ the selected usernames (instead of hardcoding them and potentially
+ stomping on an existing installation).
+ """
+ usermap = {}
+
+ for u in userkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = u + "user"
+ usermap[u] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE USER " + name)
+ self.callback(self.psql, "-c", "DROP USER " + name)
+
+ return usermap
+
+ def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
+ """
+ Creates new databases which will be dropped at the end of the server
+ context. See create_users() for the meaning of the keys and returned
+ map.
+ """
+ dbmap = {}
+
+ for d in dbkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = d + "db"
+ dbmap[d] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE DATABASE " + name)
+ self.callback(self.psql, "-c", "DROP DATABASE " + name)
+
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ try:
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+ except:
+ # We only want to reload at the end of the suite if there were
+ # no errors. During exceptions, the pushed callback handles
+ # things instead, so there's nothing to do here.
+ raise
+ else:
+ # Suite completed successfully.
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ try:
+ self.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ except:
+ raise
+ else:
+ self.pg_ctl("restart")
+
+ def psql(self, *args):
+ """
+ Runs psql with the given arguments. Password prompts are always
+ disabled. On Windows, the admin password will be included in the
+ environment.
+ """
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=winpassword)
+ else:
+ pw = None
+
+ self._run("psql", "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """
+ Runs pg_ctl with the given arguments. Log output will be placed in
+ postgresql.log in the server's data directory.
+
+ TODO: put the log in TESTLOGDIR
+ """
+ self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
+
+ def _run(self, cmd, *args, addenv: dict = None):
+ # Override the existing environment with the connenv values and
+ # anything the caller wanted to add. (Python 3.9 gives us the
+ # less-ugly `os.environ | connenv` merge operator.)
+ subenv = dict(os.environ, **connenv)
+ if addenv:
+ subenv.update(addenv)
+
+ subprocess.check_call([cmd, *args], env=subenv)
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return self._Backup(
+ hba=self.enter_context(HBA(datadir)),
+ conf=self.enter_context(Config(datadir)),
+ )
+
+ with _Server() as s:
+ yield s
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_session, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module. The fixture
+ variable is a tuple (users, dbs) containing the user and database names that
+ have been chosen for the test session.
+ """
+ try:
+ with pg_server_session.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_session.create_users(
+ "ssl",
+ )
+
+ dbs = pg_server_session.create_dbs(
+ "ssl",
+ )
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
+
+
+@pytest.fixture
+def pg_server(pg_server_session):
+ """
+ A per-test instance of pg_server_session. Use this fixture to make changes
+ to the server which will be rolled back at the end of every test.
+ """
+ with pg_server_session.subcontext() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+@pytest.mark.parametrize(
+ # fmt: off
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+ # fmt: on
+)
+def test_direct_ssl_certificate_authentication(
+ pg_server,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg_server.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.34.1
v1-0005-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch
From eab31cf9c285a28a90d49ad9b90ea5d05050715a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v1 5/6] ci: Add MTEST_SUITES for optional test tailoring
Should make it easier to control the test cycle time for Cirrus. Add the
desired suites (remembering `--suite setup`!) to the top-level envvar.
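For example, to run only a few suites (the combination used by the last
patch in this series):

    MTEST_SUITES: --suite setup --suite pytest --suite ssl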
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 4e744f1c105..706a809f641 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -247,7 +248,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -391,7 +392,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -605,7 +606,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -617,7 +618,7 @@ task:
test_world_32_script: |
su postgres <<-EOF
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -743,7 +744,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -826,7 +827,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -887,7 +888,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.34.1
v1-0006-XXX-run-pytest-and-ssl-suite-all-OSes.patch
From 1776770731802ff4300cf61660ff05fee9cf7ffc Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:38:52 -0700
Subject: [PATCH v1 6/6] XXX run pytest and ssl suite, all OSes
---
.cirrus.star | 2 +-
.cirrus.tasks.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.cirrus.star b/.cirrus.star
index e9bb672b959..7c1caaa12f1 100644
--- a/.cirrus.star
+++ b/.cirrus.star
@@ -73,7 +73,7 @@ def compute_environment_vars():
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
- default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
+ default_manual_trigger_tasks = []
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 706a809f641..ddb5305dc81 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,7 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
- MTEST_SUITES: # --suite setup --suite ssl --suite ...
+ MTEST_SUITES: --suite setup --suite pytest --suite ssl
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
--
2.34.1
On Tue, Sep 9, 2025 at 2:50 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
When enabling the feature, the check_pytest.py script checks that the
configured `PYTHON` executable has all of pytest-requirements.txt
installed. Peter pointed out that this is incorrect: what we actually
want to check is that the interpreter used by pytest has all of the
required packages, and the two could be different.
Turns out we've already solved this exact problem for Perl and Prove
[1]http://postgr.es/c/c4fe3199a, and I should probably choose a similar
solution for Python and pytest. In other words: make the requirements
check into a test.
--Jacob
On Thu, Sep 18, 2025 at 11:22 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
Turns out we've already solved this exact problem for Perl and Prove
[1], and I should probably choose a similar solution for Python and
pytest. In other words: make the requirements check into a test.
Done this way in v2-0002. pytest and the linked `PYTHON` can now be
independent of each other. This adds some scaffolding complexity, to
get the configure script and pytest to talk to each other nicely, but
I've gotten rid of some architectural complexity in check_pytest.py to
make up for it.
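To sketch the shape of it: the check is now an ordinary test function,
so whatever interpreter pytest runs under is necessarily the one that
gets inspected. Condensed from v2-0002's check_pytest.py (the real file
also falls back gracefully on Pythons without importlib.metadata, and
reports each package's status back to configure):

    from importlib import metadata
    from packaging.requirements import Requirement

    def test_packages(requirements_file):
        missing = []
        with open(requirements_file) as f:
            for line in f:
                spec = line.strip()
                if not spec or spec.startswith("#"):
                    continue  # blank line or whole-line comment
                req = Requirement(spec)
                if req.marker and not req.marker.evaluate():
                    continue  # not needed for this environment
                try:
                    version = metadata.version(req.name)
                except metadata.PackageNotFoundError:
                    missing.append(f"{req.name}: not installed")
                    continue
                if not req.specifier.contains(version):
                    missing.append(f"{req.name}: {version} fails '{req.specifier}'")
        assert not missing, "\n".join(missing)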
Thanks,
--Jacob
Attachments:
v2-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch
From 8443d4262985edaa8acb204f2396e9c55d404c07 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v2 1/6] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index 395416a6060..37ed68ceeb4 100644
--- a/meson.build
+++ b/meson.build
@@ -3952,6 +3952,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -3988,3 +3989,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
--
2.34.1
v2-0002-Add-support-for-pytest-test-suites.patch
From c4df7dfd125106144d828b790d236b4e7f83d169 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v2 2/6] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
test_something.py is intended to show a sample failure in the CI.
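For reference, with the plugin loaded, the stdout that mtest sees reduces
to plain TAP, along these lines (test names are pytest node IDs; exact
IDs may differ):

    1..2
    not ok 1 - pyt/test_something.py::test_something
    ok 2 - pyt/test_something.py::test_something_else

while the usual pytest-formatted report is redirected to a log file, and
failure details go straight to stderr for mtest to display.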
TODOs:
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
---
.cirrus.tasks.yml | 38 +++--
.gitignore | 1 +
config/check_pytest.py | 150 ++++++++++++++++++++
config/conftest.py | 18 +++
config/pytest-requirements.txt | 21 +++
configure | 108 +++++++++++++-
configure.ac | 25 +++-
meson.build | 92 ++++++++++++
meson_options.txt | 8 +-
pytest.ini | 6 +
src/Makefile.global.in | 23 +++
src/makefiles/meson.build | 2 +
src/test/Makefile | 11 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 +++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/plugins/pgtap.py | 193 ++++++++++++++++++++++++++
src/test/pytest/pyt/test_something.py | 17 +++
19 files changed, 736 insertions(+), 15 deletions(-)
create mode 100644 config/check_pytest.py
create mode 100644 config/conftest.py
create mode 100644 config/pytest-requirements.txt
create mode 100644 pytest.ini
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/plugins/pgtap.py
create mode 100644 src/test/pytest/pyt/test_something.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index eca9d62fc22..80f9b394bd2 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -222,7 +224,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -311,7 +315,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -322,6 +329,7 @@ task:
OS_NAME: openbsd
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
+ TERM: # TODO why does pytest print ANSI escapes on OpenBSD?
MESON_FEATURES: >-
-Dbsd_auth=enabled
@@ -330,7 +338,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -489,8 +499,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -513,14 +525,15 @@ task:
su postgres <<-EOF
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang-16"
+ CLANG="ccache clang-16" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -650,6 +663,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -699,6 +714,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -772,6 +788,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -780,8 +798,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -844,7 +864,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..268426003b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
diff --git a/config/check_pytest.py b/config/check_pytest.py
new file mode 100644
index 00000000000..1562d16bcda
--- /dev/null
+++ b/config/check_pytest.py
@@ -0,0 +1,150 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Verify that pytest-requirements.txt is satisfied. This would probably be
+# easier with pip, but requiring pip on build machines is a non-starter for
+# many.
+#
+# This is coded as a pytest suite in order to check the Python distribution in
+# use by pytest, as opposed to the Python distribution being linked against
+# Postgres. In some setups they are separate.
+#
+# The design philosophy of this script is to bend over backwards to help people
+# figure out what is missing. The target audience for error output is the
+# buildfarm operator who just wants to get the tests running, not the test
+# developer who presumably already knows how to solve these problems.
+
+import importlib
+import sys
+from typing import List, Union # needed for earlier Python versions
+
+# importlib.metadata is part of the standard library from 3.8 onwards. Earlier
+# Python versions have an official backport called importlib_metadata, which can
+# generally be installed as a separate OS package (python3-importlib-metadata).
+# This complication can be removed once we stop supporting Python 3.7.
+try:
+ from importlib import metadata
+except ImportError:
+ try:
+ import importlib_metadata as metadata
+ except ImportError:
+ # package_version() will need to fall back. This is unlikely to happen
+ # in practice, because pytest 7.x depends on importlib_metadata itself.
+ metadata = None
+
+
+def report(*args):
+ """
+ Prints a configure-time message to the user. (The configure scripts will
+ display these messages and ignore the output from the pytest suite.) This
+ assumes --capture=no is in use, to avoid pytest's standard stream capture.
+ """
+ print(*args, file=sys.stderr)
+
+
+def package_version(pkg: str) -> Union[str, None]:
+ """
+ Returns the version of the named package, or None if the package is not
+ installed.
+
+ This function prefers to use the distribution package version, if we have
+ the necessary prerequisites. Otherwise it will fall back to the __version__
+ of the imported module, which aligns with pytest.importorskip().
+ """
+ if metadata is not None:
+ try:
+ return metadata.version(pkg)
+ except metadata.PackageNotFoundError:
+ return None
+
+ # This is an older Python and we don't have importlib_metadata. Fall back to
+ # __version__ instead.
+ try:
+ mod = importlib.import_module(pkg)
+ except ModuleNotFoundError:
+ return None
+
+ if hasattr(mod, "__version__"):
+ return mod.__version__
+
+ # We're out of options. If this turns out to cause problems in practice, we
+ # might need to require importlib_metadata on older buildfarm members. But
+ # since our top-level requirements list will be small, and this possibility
+ # will eventually age out with newer Pythons, don't spend more effort on
+ # this case for now.
+ report(f"Fix check_pytest.py! {pkg} has no __version__")
+ assert False, "internal error in package_version()"
+
+
+def packaging_check(requirements: List[str]) -> bool:
+ """
+ Reports the status of each required package to the configure program.
+ Returns True if all dependencies were found.
+ """
+ report() # an opening newline makes the configure output easier to read
+
+ try:
+ # packaging contains the PyPA definitions of requirement specifiers.
+ # This is contained in a separate OS package (for example,
+ # python3-packaging), but it's extremely likely that the user has it
+ # installed already, because modern versions of pytest depend on it too.
+ import packaging
+ from packaging.requirements import Requirement
+
+ except ImportError as err:
+ # We don't even have enough prerequisites to check our prerequisites.
+ # Print the import error as-is.
+ report(err)
+ return False
+
+ # Strip extraneous whitespace, whole-line comments, and empty lines from our
+ # specifier list.
+ requirements = [r.strip() for r in requirements]
+ requirements = [r for r in requirements if r and r[0] != "#"]
+
+ found = True
+ for spec in requirements:
+ req = Requirement(spec)
+
+ # Skip any packages marked as unneeded for this particular Python env.
+ if req.marker and not req.marker.evaluate():
+ continue
+
+ # Make sure the package is installed...
+ version = package_version(req.name)
+ if version is None:
+ report(f"package '{req.name}': not installed")
+ found = False
+ continue
+
+ # ...and that it has a compatible version.
+ if not req.specifier.contains(version):
+ report(
+ "package '{}': has version {}, but '{}' is required".format(
+ req.name, version, req.specifier
+ ),
+ )
+ found = False
+ continue
+
+ # Report installed packages too, to mirror check_modules.pl.
+ report(f"package '{req.name}': installed (version {version})")
+
+ return found
+
+
+def test_packages(requirements_file):
+ """
+ Entry point.
+ """
+ try:
+ with open(requirements_file, "r") as f:
+ requirements = f.readlines()
+
+ all_found = packaging_check(requirements)
+
+ except Exception as err:
+ # Surface any breakage to the configure script before failing the test.
+ report(err)
+ raise
+
+ assert all_found, "required packages are missing"
diff --git a/config/conftest.py b/config/conftest.py
new file mode 100644
index 00000000000..a9c2bc546e8
--- /dev/null
+++ b/config/conftest.py
@@ -0,0 +1,18 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Support for check_pytest.py. The configure script provides the path to
+# pytest-requirements.txt via the --requirements option added here.
+
+import pytest
+
+
+def pytest_addoption(parser):
+ parser.addoption(
+ "--requirements",
+ help="path to pytest-requirements.txt",
+ )
+
+
+@pytest.fixture
+def requirements_file(request):
+ return request.config.getoption("--requirements")
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
new file mode 100644
index 00000000000..b941624b2f3
--- /dev/null
+++ b/config/pytest-requirements.txt
@@ -0,0 +1,21 @@
+#
+# This file contains the Python packages which are required in order for us to
+# enable pytest.
+#
+# The syntax is a *subset* of pip's requirements.txt syntax, so that both pip
+# and check_pytest.py can use it. Only whole-line comments and standard Python
+# dependency specifiers are allowed. pip-specific goodies like includes and
+# environment substitutions are not supported; keep it simple.
+#
+# Packages belong here if their absence should cause a configuration failure. If
+# you'd like to make a package optional, consider using pytest.importorskip()
+# instead.
+#
+
+# pytest 7.0 was the last version which supported Python 3.6, but the BSDs have
+# started putting 8.x into ports, so we support both. (pytest 8 can be used
+# throughout once we drop support for Python 3.7.)
+pytest >= 7.0, < 9
+
+# packaging is used by check_pytest.py at configure time.
+packaging
diff --git a/configure b/configure
index 22cd866147b..aa93fa5f0aa 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,7 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -771,6 +772,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -849,6 +851,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1549,7 +1552,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3631,7 +3637,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3659,6 +3665,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19074,6 +19106,78 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Python packages required for pytest" >&5
+$as_echo_n "checking for Python packages required for pytest... " >&6; }
+ modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`
+ if test $? -eq 0; then
+ echo "$modulestderr" >&5
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $modulestderr" >&5
+$as_echo "$modulestderr" >&6; }
+ as_fn_error $? "Additional Python packages are required to run the pytest suites" "$LINENO" 5
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index e44943aa6fe..25442050f34 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2415,6 +2420,22 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ AC_MSG_ERROR([pytest not found])
+ fi
+ AC_MSG_CHECKING(for Python packages required for pytest)
+ [modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`]
+ if test $? -eq 0; then
+ echo "$modulestderr" >&AS_MESSAGE_LOG_FD
+ AC_MSG_RESULT(yes)
+ else
+ AC_MSG_RESULT([$modulestderr])
+ AC_MSG_ERROR([Additional Python packages are required to run the pytest suites])
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 37ed68ceeb4..06eb7a19210 100644
--- a/meson.build
+++ b/meson.build
@@ -1702,6 +1702,39 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+ if pytest.found()
+ pytest_check = run_command(pytest,
+ '-c', 'pytest.ini',
+ '--confcutdir=config',
+ '--capture=no',
+ 'config/check_pytest.py',
+ '--requirements', 'config/pytest-requirements.txt',
+ check: false)
+ if pytest_check.returncode() != 0
+ message(pytest_check.stderr())
+ if pytestopt.enabled()
+ error('Additional Python packages are required to run the pytest suites.')
+ else
+ warning('Additional Python packages are required to run the pytest suites.')
+ endif
+ else
+ pytest_enabled = true
+ endif
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3779,6 +3812,63 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = [
+ pytest.full_path(),
+ '-c', meson.project_source_root() / 'pytest.ini',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest' / 'plugins')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3953,6 +4043,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -3993,6 +4084,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 00000000000..8e8388f3afc
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1,6 @@
+[pytest]
+minversion = 7.0
+
+# Ignore ./config (which contains the configure-time check_pytest.py tests) by
+# default.
+addopts = --ignore ./config
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0aa389bc710..8a6885206ce 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -353,6 +354,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -507,6 +509,27 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest/plugins:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pytest.ini' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 0def244c901..f68acd57bc4 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
new file mode 100644
index 00000000000..ef8291e291c
--- /dev/null
+++ b/src/test/pytest/plugins/pgtap.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+from typing import Optional
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/test/pytest/pyt/test_something.py b/src/test/pytest/pyt/test_something.py
new file mode 100644
index 00000000000..5bd45618512
--- /dev/null
+++ b/src/test/pytest/pyt/test_something.py
@@ -0,0 +1,17 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import pytest
+
+
+@pytest.fixture
+def hey():
+ yield
+ raise "uh-oh"
+
+
+def test_something(hey):
+ assert 2 == 4
+
+
+def test_something_else():
+ assert 2 == 2
--
2.34.1
v2-0003-WIP-pytest-Add-some-SSL-client-tests.patch
From b7243e7da3b1d5574f8a82b55f26164d65124b3c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 19 Aug 2025 12:56:45 -0700
Subject: [PATCH v2 3/6] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The `pg` test package contains some helpers and fixtures (as well as
some self-tests for more complicated behavior). Of note:
- pg.require_test_extra() lets you mark a test/class/module as skippable
if PG_TEST_EXTRA does not contain the necessary strings.
- pg.remaining_timeout() is a function which can be repeatedly called to
determine how much of the PG_TEST_TIMEOUT_DEFAULT remains for the
current test item.
- pg.libpq is a fixture that wraps libpq.so in a more friendly, but
still low-level, ctypes FFI. Allocated resources are unwound and
released during test teardown.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
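Concretely, a test hands the fixture a callback and then drives the
client from the main thread; condensed from test_libpq.py in this patch:

    def serve_error(s: socket.socket) -> None:
        pktlen = struct.unpack("!I", s.recv(4))[0]
        s.recv(pktlen - 4)  # discard the rest of the startup packet
        s.send(b"Esomething is wrong\0")  # protocol-v2-style error

    local_server.background(serve_error)
    with pytest.raises(libpq.Error, match="something is wrong"):
        libpq.must_connect(host=local_server.host, port=local_server.port)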
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 +-
config/pytest-requirements.txt | 10 ++
pytest.ini | 3 +
src/test/pytest/meson.build | 1 +
src/test/pytest/pg/__init__.py | 3 +
src/test/pytest/pg/_env.py | 55 ++++++
src/test/pytest/pg/fixtures.py | 212 +++++++++++++++++++++++
src/test/pytest/pyt/conftest.py | 3 +
src/test/pytest/pyt/test_libpq.py | 171 ++++++++++++++++++
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 129 ++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++
13 files changed, 885 insertions(+), 6 deletions(-)
create mode 100644 src/test/pytest/pg/__init__.py
create mode 100644 src/test/pytest/pg/_env.py
create mode 100644 src/test/pytest/pg/fixtures.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 80f9b394bd2..4e744f1c105 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -225,6 +225,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -316,6 +317,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -339,8 +341,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -501,8 +504,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -643,6 +647,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -663,6 +668,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -801,7 +807,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -864,7 +870,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
index b941624b2f3..0bd6cadf608 100644
--- a/config/pytest-requirements.txt
+++ b/config/pytest-requirements.txt
@@ -19,3 +19,13 @@ pytest >= 7.0, < 9
# packaging is used by check_pytest.py at configure time.
packaging
+
+# Notes on the cryptography package:
+# - 3.3.2 is shipped on Debian bullseye.
+# - 3.4.x drops support for Python 2, making it a version of note for older LTS
+# distros.
+# - 35.x switched versioning schemes and moved to Rust parsing.
+# - 40.x is the last version supporting Python 3.6.
+# XXX Is it appropriate to require cryptography, or should we simply skip
+# dependent tests?
+cryptography >= 3.3.2
diff --git a/pytest.ini b/pytest.ini
index 8e8388f3afc..e7aa84f3a84 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -4,3 +4,6 @@ minversion = 7.0
# Ignore ./config (which contains the configure-time check_pytest.py tests) by
# default.
addopts = --ignore ./config
+
+# Common test code can be found here.
+pythonpath = src/test/pytest
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..f53193e8686 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -11,6 +11,7 @@ tests += {
'pytest': {
'tests': [
'pyt/test_something.py',
+ 'pyt/test_libpq.py',
],
},
}
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
new file mode 100644
index 00000000000..ef8faf54ca4
--- /dev/null
+++ b/src/test/pytest/pg/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import has_test_extra, require_test_extra
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pg/_env.py
new file mode 100644
index 00000000000..6f18af07844
--- /dev/null
+++ b/src/test/pytest/pg/_env.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+from typing import List, Optional
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extra(*keys: str) -> pytest.MarkDecorator:
+ """
+ A convenience annotation which skips tests unless all of the required
+ keys are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pg.require_test_extra("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pg.require_test_extra("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([has_test_extra(k) for k in keys]),
+ reason="requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys)),
+ )
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
new file mode 100644
index 00000000000..b5d3bff69a8
--- /dev/null
+++ b/src/test/pytest/pg/fixtures.py
@@ -0,0 +1,212 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import platform
+import time
+from typing import Any, Callable, Dict
+
+import pytest
+
+from ._env import test_timeout_default
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle():
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(name, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ return lib
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self):
+ return self._lib.PQresultStatus(self._res)
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str) -> PGresult:
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+
+@pytest.fixture
+def libpq(libpq_handle, remaining_timeout):
+ """
+ Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
+ track of allocated resources and cleans them up during teardown. See
+ _Libpq's public API for details.
+ """
+
+ class _Libpq(contextlib.ExitStack):
+ CONNECTION_OK = 0
+
+ PGRES_EMPTY_QUERY = 0
+
+ class Error(RuntimeError):
+ """
+ libpq.Error is the exception class for application-level errors that
+ are encountered during libpq operations.
+ """
+
+ pass
+
+ def __init__(self):
+ super().__init__()
+ self.lib = libpq_handle
+
+ def _connstr(self, opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+ def must_connect(self, **opts) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a libpq.PGconn object wrapping the connection handle. A
+ failure will raise libpq.Error.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
+
+ # Ensure the connection handle is always closed at the end of the
+ # test.
+ conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
+
+ if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
+ raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
+
+ return conn
+
+ with _Libpq() as lib:
+ yield lib
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..ecb72be26d7
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from pg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..9f0857cc612
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,171 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(libpq, opts, expected):
+ """Tests the escape behavior for libpq._connstr()."""
+ assert libpq._connstr(opts) == expected
+
+
+def test_must_connect_errors(libpq):
+ """Tests that must_connect() raises libpq.Error."""
+ with pytest.raises(libpq.Error, match="invalid connection option"):
+ libpq.must_connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(libpq.Error, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ with libpq:
+ libpq.must_connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..fb4db372f03
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,129 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+import pg
+from pg.fixtures import *
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..28110ae0717
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform an SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(libpq.Error, match="server does not support SSL"):
+ with libpq: # XXX tests shouldn't need to do this
+ libpq.must_connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(libpq, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = libpq.must_connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
--
2.34.1
Attachment: v2-0004-WIP-pytest-Add-some-server-side-SSL-tests.patch
From 493887b8dd9c947c0c5c3f94bd2b3ecc03c64341 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 22 Aug 2025 17:39:40 -0700
Subject: [PATCH v2 4/6] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
New session-level fixtures have been added, which probably need to
migrate to the `pg` package. Of note:
- datadir points to the server's data directory
- sockdir points to the server's UNIX socket/lock directory
- server_instance actually inits and starts a server via the pg_ctl on
PATH (and could eventually point at an installcheck target)
Wrapping these session-level fixtures is pg_server[_session], which
provides APIs for configuration changes that unwind themselves at the
end of fixture scopes. There's also an example of nested scopes, via
pg_server_session.subcontext(). Many TODOs remain before we're on par
with Test::Cluster, but this should illustrate my desired architecture
pretty well.
Windows currently uses SCRAM-over-UNIX for the admin account rather than
SSPI-over-TCP. There's some dead Win32 code in pg.current_windows_user,
but I've kept it as an illustration of how a developer might write such
code for SSPI. I'll probably remove it in a future patch version.
TODOs:
- port more server configuration behavior from PostgreSQL::Test::Cluster
- decide again on "session" vs. "module" scope for server fixtures
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/pytest/pg/__init__.py | 1 +
src/test/pytest/pg/_win32.py | 145 +++++++++
src/test/ssl/pyt/conftest.py | 113 +++++++
src/test/ssl/pyt/test_server.py | 538 ++++++++++++++++++++++++++++++++
4 files changed, 797 insertions(+)
create mode 100644 src/test/pytest/pg/_win32.py
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
index ef8faf54ca4..5dae49b6406 100644
--- a/src/test/pytest/pg/__init__.py
+++ b/src/test/pytest/pg/__init__.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
from ._env import has_test_extra, require_test_extra
+from ._win32 import current_windows_user
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pg/_win32.py
new file mode 100644
index 00000000000..3fd67b10191
--- /dev/null
+++ b/src/test/pytest/pg/_win32.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import ctypes
+import platform
+
+
+def current_windows_user():
+ """
+ A port of pg_regress.c's current_windows_user() helper. Returns
+ (accountname, domainname).
+
+ XXX This is dead code now, but I'm keeping it as a motivating example of
+ Win32 interaction, and someone may find it useful in the future when writing
+ SSPI tests?
+ """
+ try:
+ advapi32 = ctypes.windll.advapi32
+ kernel32 = ctypes.windll.kernel32
+ except AttributeError:
+ raise RuntimeError(
+ f"current_windows_user() is not supported on {platform.system()}"
+ )
+
+ def raise_winerror_when_false(result, func, arguments):
+ """
+ A ctypes errcheck handler that raises WinError (which will contain the
+ result of GetLastError()) when the function's return value is false.
+ """
+ if not result:
+ raise ctypes.WinError()
+
+ #
+ # Function Prototypes
+ #
+
+ from ctypes import wintypes
+
+ # GetCurrentProcess
+ kernel32.GetCurrentProcess.restype = wintypes.HANDLE
+ kernel32.GetCurrentProcess.argtypes = []
+
+ # OpenProcessToken
+ TOKEN_READ = 0x00020008
+
+ advapi32.OpenProcessToken.restype = wintypes.BOOL
+ advapi32.OpenProcessToken.argtypes = [
+ wintypes.HANDLE,
+ wintypes.DWORD,
+ wintypes.PHANDLE,
+ ]
+ advapi32.OpenProcessToken.errcheck = raise_winerror_when_false
+
+ # GetTokenInformation
+ PSID = wintypes.LPVOID # we don't need the internals
+ TOKEN_INFORMATION_CLASS = wintypes.INT
+ TokenUser = 1
+
+ class SID_AND_ATTRIBUTES(ctypes.Structure):
+ _fields_ = [
+ ("Sid", PSID),
+ ("Attributes", wintypes.DWORD),
+ ]
+
+ class TOKEN_USER(ctypes.Structure):
+ _fields_ = [
+ ("User", SID_AND_ATTRIBUTES),
+ ]
+
+ advapi32.GetTokenInformation.restype = wintypes.BOOL
+ advapi32.GetTokenInformation.argtypes = [
+ wintypes.HANDLE,
+ TOKEN_INFORMATION_CLASS,
+ wintypes.LPVOID,
+ wintypes.DWORD,
+ wintypes.PDWORD,
+ ]
+ advapi32.GetTokenInformation.errcheck = raise_winerror_when_false
+
+ # LookupAccountSid
+ SID_NAME_USE = wintypes.INT
+ PSID_NAME_USE = ctypes.POINTER(SID_NAME_USE)
+
+ advapi32.LookupAccountSidW.restype = wintypes.BOOL
+ advapi32.LookupAccountSidW.argtypes = [
+ wintypes.LPCWSTR,
+ PSID,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ PSID_NAME_USE,
+ ]
+ advapi32.LookupAccountSidW.errcheck = raise_winerror_when_false
+
+ #
+ # Implementation (see pg_SSPI_recv_auth())
+ #
+
+ # Get the current process token...
+ token = wintypes.HANDLE()
+ proc = kernel32.GetCurrentProcess()
+ advapi32.OpenProcessToken(proc, TOKEN_READ, token)
+
+ # ...then read the TOKEN_USER struct for that token...
+ info = TOKEN_USER()
+ infolen = wintypes.DWORD()
+
+ try:
+ # (GetTokenInformation creates a buffer bigger than TOKEN_USER, so we
+ # have to query the correct length first.)
+ advapi32.GetTokenInformation(token, TokenUser, None, 0, ctypes.byref(infolen))
+ assert False, "GetTokenInformation succeeded unexpectedly"
+
+ except OSError as err:
+ assert err.winerror == 122 # insufficient buffer
+
+ ctypes.resize(info, infolen.value)
+ advapi32.GetTokenInformation(
+ token,
+ TokenUser,
+ ctypes.byref(info),
+ ctypes.sizeof(info),
+ ctypes.byref(infolen),
+ )
+
+ # ...then pull the account and domain names out of the user SID.
+ MAXPGPATH = 1024
+
+ account = ctypes.create_unicode_buffer(MAXPGPATH)
+ domain = ctypes.create_unicode_buffer(MAXPGPATH)
+ accountlen = wintypes.DWORD(ctypes.sizeof(account))
+ domainlen = wintypes.DWORD(ctypes.sizeof(domain))
+ use = SID_NAME_USE()
+
+ advapi32.LookupAccountSidW(
+ None,
+ info.User.Sid,
+ account,
+ ctypes.byref(accountlen),
+ domain,
+ ctypes.byref(domainlen),
+ ctypes.byref(use),
+ )
+
+ return (account.value, domain.value)
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index fb4db372f03..85d2c994828 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,6 +1,12 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
+import os
+import pathlib
+import platform
+import secrets
+import socket
+import subprocess
import tempfile
from collections import namedtuple
@@ -127,3 +133,110 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def server_instance(certs, datadir, sockdir, winpassword):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ TODO: when installcheck is supported, this should optionally point to the
+ currently running server instead.
+ """
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ subprocess.check_call(
+ ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
+ )
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = os.path.join(datadir, "postgresql.log")
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
+ yield (hostaddr, port)
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..2d0be735371
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,538 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import ssl
+import struct
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Dict, List, Union
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture(scope="session")
+def connenv(server_instance, sockdir, datadir):
+ """
+ Provides the values for several PG* environment variables needed for our
+ utility programs to connect to the server_instance.
+ """
+ return {
+ "PGHOST": str(sockdir),
+ "PGPORT": str(server_instance[1]),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(datadir),
+ }
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ #
+ # TODO: this is less helpful if there are multiple layers, because it's
+ # not clear which backup to look at. Can the backup name be printed as
+ # part of the failed test output? Should we only swap on test failure?
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines: Union[str, List[str]]):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for l in lines:
+ if isinstance(l, list):
+ print(*l, file=f)
+ else:
+ print(l, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+@pytest.fixture(scope="session")
+def pg_server_session(server_instance, connenv, datadir, winpassword):
+ """
+ Provides common routines for configuring and connecting to the
+ server_instance. For example:
+
+ users = pg_server_session.create_users("one", "two")
+ dbs = pg_server_session.create_dbs("default")
+
+ with pg_server_session.reloading() as s:
+ s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
+
+ conn = connect_somehow(**pg_server_session.conninfo)
+ ...
+
+ Attributes of note are
+ - .conninfo: provides TCP connection info for the server
+
+ This fixture unwinds its configuration changes at the end of the pytest
+ session. For more granular changes, pg_server_session.subcontext() splits
+ off a "nested" context to allow smaller scopes.
+ """
+
+ class _Server(contextlib.ExitStack):
+ conninfo = dict(
+ hostaddr=server_instance[0],
+ port=server_instance[1],
+ )
+
+ # for _backup_configuration()
+ _Backup = namedtuple("Backup", "conf, hba")
+
+ def subcontext(self):
+ """
+ Creates a new server stack instance that can be tied to a smaller
+ scope than "session".
+ """
+ # So far, there doesn't seem to be a need to link the two objects,
+ # since HBA/Config/FileBackup operate directly on the filesystem and
+ # will appear to "nest" naturally.
+ return self.__class__()
+
+ def create_users(self, *userkeys: str) -> Dict[str, str]:
+ """
+ Creates new users which will be dropped at the end of the server
+ context.
+
+ For each provided key, a related user name will be selected and
+ stored in a map. This map is returned to let calling code look up
+ the selected usernames (instead of hardcoding them and potentially
+ stomping on an existing installation).
+ """
+ usermap = {}
+
+ for u in userkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = u + "user"
+ usermap[u] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE USER " + name)
+ self.callback(self.psql, "-c", "DROP USER " + name)
+
+ return usermap
+
+ def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
+ """
+ Creates new databases which will be dropped at the end of the server
+ context. See create_users() for the meaning of the keys and returned
+ map.
+ """
+ dbmap = {}
+
+ for d in dbkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = d + "db"
+ dbmap[d] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE DATABASE " + name)
+ self.callback(self.psql, "-c", "DROP DATABASE " + name)
+
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ try:
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+ except:
+ # We only want to reload at the end of the suite if there were
+ # no errors. During exceptions, the pushed callback handles
+ # things instead, so there's nothing to do here.
+ raise
+ else:
+ # Suite completed successfully.
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ try:
+ self.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ except:
+ raise
+ else:
+ self.pg_ctl("restart")
+
+ def psql(self, *args):
+ """
+ Runs psql with the given arguments. Password prompts are always
+ disabled. On Windows, the admin password will be included in the
+ environment.
+ """
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=winpassword)
+ else:
+ pw = None
+
+ self._run("psql", "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """
+ Runs pg_ctl with the given arguments. Log output will be placed in
+ postgresql.log in the server's data directory.
+
+ TODO: put the log in TESTLOGDIR
+ """
+ self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
+
+ def _run(self, cmd, *args, addenv: dict = None):
+ # Override the existing environment with the connenv values and
+ # anything the caller wanted to add. (Python 3.9 gives us the
+ # less-ugly `os.environ | connenv` merge operator.)
+ subenv = dict(os.environ, **connenv)
+ if addenv:
+ subenv.update(addenv)
+
+ subprocess.check_call([cmd, *args], env=subenv)
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return self._Backup(
+ hba=self.enter_context(HBA(datadir)),
+ conf=self.enter_context(Config(datadir)),
+ )
+
+ with _Server() as s:
+ yield s
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_session, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module. The fixture
+ variable is a tuple (users, dbs) containing the user and database names that
+ have been chosen for the test session.
+ """
+ try:
+ with pg_server_session.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_session.create_users(
+ "ssl",
+ )
+
+ dbs = pg_server_session.create_dbs(
+ "ssl",
+ )
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
+
+
+@pytest.fixture
+def pg_server(pg_server_session):
+ """
+ A per-test instance of pg_server_session. Use this fixture to make changes
+ to the server which will be rolled back at the end of every test.
+ """
+ with pg_server_session.subcontext() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+@pytest.mark.parametrize(
+ # fmt: off
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+ # fmt: on
+)
+def test_direct_ssl_certificate_authentication(
+ pg_server,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg_server.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.34.1
Attachment: v2-0005-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch
From cb8a0a09a81e7a0a22207398d038ddb3f727b473 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v2 5/6] ci: Add MTEST_SUITES for optional test tailoring
This should make it easier to control the test cycle time for Cirrus. Add
the desired suites (remembering `--suite setup`!) to the top-level envvar.
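For example, to exercise just the new pytest suite and its SSL tests (this
is the setting used by the last patch in this series):

    MTEST_SUITES: --suite setup --suite pytest --suite ssl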
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 4e744f1c105..706a809f641 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -247,7 +248,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -391,7 +392,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -605,7 +606,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -617,7 +618,7 @@ task:
test_world_32_script: |
su postgres <<-EOF
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -743,7 +744,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -826,7 +827,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -887,7 +888,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.34.1
Attachment: v2-0006-XXX-run-pytest-and-ssl-suite-all-OSes.patch
From 5d95b517f6ec399150e56cf40f68843331d89f01 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:38:52 -0700
Subject: [PATCH v2 6/6] XXX run pytest and ssl suite, all OSes
---
.cirrus.star | 2 +-
.cirrus.tasks.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.cirrus.star b/.cirrus.star
index e9bb672b959..7c1caaa12f1 100644
--- a/.cirrus.star
+++ b/.cirrus.star
@@ -73,7 +73,7 @@ def compute_environment_vars():
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
- default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
+ default_manual_trigger_tasks = []
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 706a809f641..ddb5305dc81 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,7 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
- MTEST_SUITES: # --suite setup --suite ssl --suite ...
+ MTEST_SUITES: --suite setup --suite pytest --suite ssl
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
--
2.34.1
On Mon, 22 Sept 2025 at 22:30, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
> Done this way in v2-0002
Okay, I finally managed to do some testing of this patchset while working
on a patchset of mine that adds a GoAway message to the protocol (it
should be ready to publish soon).
First of all: THANK YOU! It's a great base to start from, and I hope we
can get something merged relatively soon that we can then gradually
improve.
I had some problems using it for my own tests, though. The primary
reasons were:
1. It was missing functionality to send queries and get results.
2. A lot of the fixtures I wanted to use were located in the ssl tests
directory instead of the shared fixtures module.
3. When running pytest manually, I had to configure LD_LIBRARY_PATH.
So here's your patchset with an additional commit on top that does a
bunch of refactoring/renaming and adds some features. I hope you like it.
I tried to make the most common actions easy to do.
The primary features it adds are:
- A `sql` method on `PGconn`: it takes a query and returns the results
as native Python types (see the sketch after this list).
- A `conn` fixture: a libpq-based connection to the default Postgres
server.
- Use of the `pg_config` binary to find the libdir and bindir (can be
overridden by setting PG_CONFIG). Without this, I had to set
LD_LIBRARY_PATH when running pytest manually.
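Here's a rough sketch of how those pieces fit together; the exact shape
of sql()'s return value is illustrative (I'm assuming a single scalar
comes back unwrapped):

    def test_simple_query(conn):
        # `conn` is the libpq-based connection to the default server;
        # sql() runs the query and converts the results to native
        # Python types.
        assert conn.sql("SELECT 1") == 1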
The refactoring it does:
- Rename `pg_server` fixture to `pg` since it'll likely be one of the
most commonly used ones.
- Rename `pg` module to `pypg` to avoid naming conflict/shadowing
problems with the newly renamed `pg` fixture
- Move class definitions outside of fixtures to separate modules (either
in the `pypg` module or the new `libpq` module)
- Move all "general" fixtures to the `pypg.fixtures` module, instead of
having them be defined in the ssl module.
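To illustrate the renamed pieces (a sketch; the server API itself is
unchanged from your patchset):

    import pypg  # helper module, renamed from `pg` to avoid shadowing the fixture

    pytestmark = pypg.require_test_extra("ssl")

    def test_hba_change(pg):
        # `pg` is the per-test server fixture, formerly `pg_server`.
        with pg.reloading() as s:
            s.hba.prepend("local all all trust")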
Attachments:
Attachment: v3-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch
From 6be9c11e14a7cba6877f6d8c0397cbb94fb62f6b Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v3 01/10] meson: Include TAP tests in the configuration
summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index 395416a6060..37ed68ceeb4 100644
--- a/meson.build
+++ b/meson.build
@@ -3952,6 +3952,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -3988,3 +3989,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
--
2.51.1
Attachment: v3-0002-Add-support-for-pytest-test-suites.patch
From 2a35a86f10914e95fd6e63e4224ab62a973a6a93 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v3 02/10] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
test_something.py is intended to show a sample failure in the CI.
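For reference, mtest consumes standard TAP from the plugin, so a run
with one pass and one failure is reported roughly like this (test names
are illustrative):

    1..2
    ok 1 - pyt/test_something.py::test_passes
    not ok 2 - pyt/test_something.py::test_fails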
TODOs:
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
---
.cirrus.tasks.yml | 38 +++--
.gitignore | 1 +
config/check_pytest.py | 150 ++++++++++++++++++++
config/conftest.py | 18 +++
config/pytest-requirements.txt | 21 +++
configure | 108 +++++++++++++-
configure.ac | 25 +++-
meson.build | 92 ++++++++++++
meson_options.txt | 8 +-
pytest.ini | 6 +
src/Makefile.global.in | 23 +++
src/makefiles/meson.build | 2 +
src/test/Makefile | 11 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 +++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/plugins/pgtap.py | 193 ++++++++++++++++++++++++++
src/test/pytest/pyt/test_something.py | 17 +++
19 files changed, 736 insertions(+), 15 deletions(-)
create mode 100644 config/check_pytest.py
create mode 100644 config/conftest.py
create mode 100644 config/pytest-requirements.txt
create mode 100644 pytest.ini
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/plugins/pgtap.py
create mode 100644 src/test/pytest/pyt/test_something.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index eca9d62fc22..80f9b394bd2 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -222,7 +224,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -311,7 +315,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -322,6 +329,7 @@ task:
OS_NAME: openbsd
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
+ TERM: # TODO why does pytest print ANSI escapes on OpenBSD?
MESON_FEATURES: >-
-Dbsd_auth=enabled
@@ -330,7 +338,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -489,8 +499,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -513,14 +525,15 @@ task:
su postgres <<-EOF
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang-16"
+ CLANG="ccache clang-16" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -650,6 +663,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -699,6 +714,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -772,6 +788,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -780,8 +798,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -844,7 +864,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..268426003b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
diff --git a/config/check_pytest.py b/config/check_pytest.py
new file mode 100644
index 00000000000..1562d16bcda
--- /dev/null
+++ b/config/check_pytest.py
@@ -0,0 +1,150 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Verify that pytest-requirements.txt is satisfied. This would probably be
+# easier with pip, but requiring pip on build machines is a non-starter for
+# many.
+#
+# This is coded as a pytest suite in order to check the Python distribution in
+# use by pytest, as opposed to the Python distribution being linked against
+# Postgres. In some setups they are separate.
+#
+# The design philosophy of this script is to bend over backwards to help people
+# figure out what is missing. The target audience for error output is the
+# buildfarm operator who just wants to get the tests running, not the test
+# developer who presumably already knows how to solve these problems.
+
+import importlib
+import sys
+from typing import List, Union # needed for earlier Python versions
+
+# importlib.metadata is part of the standard library from 3.8 onwards. Earlier
+# Python versions have an official backport called importlib_metadata, which can
+# generally be installed as a separate OS package (python3-importlib-metadata).
+# This complication can be removed once we stop supporting Python 3.7.
+try:
+ from importlib import metadata
+except ImportError:
+ try:
+ import importlib_metadata as metadata
+ except ImportError:
+ # package_version() will need to fall back. This is unlikely to happen
+ # in practice, because pytest 7.x depends on importlib_metadata itself.
+ metadata = None
+
+
+def report(*args):
+ """
+ Prints a configure-time message to the user. (The configure scripts will
+ display these messages and ignore the output from the pytest suite.) This
+ assumes --capture=no is in use, to avoid pytest's standard stream capture.
+ """
+ print(*args, file=sys.stderr)
+
+
+def package_version(pkg: str) -> Union[str, None]:
+ """
+ Returns the version of the named package, or None if the package is not
+ installed.
+
+ This function prefers to use the distribution package version, if we have
+ the necessary prerequisites. Otherwise it will fall back to the __version__
+ of the imported module, which aligns with pytest.importorskip().
+ """
+ if metadata is not None:
+ try:
+ return metadata.version(pkg)
+ except metadata.PackageNotFoundError:
+ return None
+
+ # This is an older Python and we don't have importlib_metadata. Fall back to
+ # __version__ instead.
+ try:
+ mod = importlib.import_module(pkg)
+ except ModuleNotFoundError:
+ return None
+
+ if hasattr(mod, "__version__"):
+ return mod.__version__
+
+ # We're out of options. If this turns out to cause problems in practice, we
+ # might need to require importlib_metadata on older buildfarm members. But
+ # since our top-level requirements list will be small, and this possibility
+ # will eventually age out with newer Pythons, don't spend more effort on
+ # this case for now.
+ report(f"Fix check_pytest.py! {pkg} has no __version__")
+ assert False, "internal error in package_version()"
+
+
+def packaging_check(requirements: List[str]) -> bool:
+ """
+ Reports the status of each required package to the configure program.
+ Returns True if all dependencies were found.
+ """
+ report() # an opening newline makes the configure output easier to read
+
+ try:
+ # packaging contains the PyPA definitions of requirement specifiers.
+ # This is contained in a separate OS package (for example,
+ # python3-packaging), but it's extremely likely that the user has it
+ # installed already, because modern versions of pytest depend on it too.
+ import packaging
+ from packaging.requirements import Requirement
+
+ except ImportError as err:
+ # We don't even have enough prerequisites to check our prerequisites.
+ # Print the import error as-is.
+ report(err)
+ return False
+
+ # Strip extraneous whitespace, whole-line comments, and empty lines from our
+ # specifier list.
+ requirements = [r.strip() for r in requirements]
+ requirements = [r for r in requirements if r and r[0] != "#"]
+
+ found = True
+ for spec in requirements:
+ req = Requirement(spec)
+
+ # Skip any packages marked as unneeded for this particular Python env.
+ if req.marker and not req.marker.evaluate():
+ continue
+
+ # Make sure the package is installed...
+ version = package_version(req.name)
+ if version is None:
+ report(f"package '{req.name}': not installed")
+ found = False
+ continue
+
+ # ...and that it has a compatible version.
+ if not req.specifier.contains(version):
+ report(
+ "package '{}': has version {}, but '{}' is required".format(
+ req.name, version, req.specifier
+ ),
+ )
+ found = False
+ continue
+
+ # Report installed packages too, to mirror check_modules.pl.
+ report(f"package '{req.name}': installed (version {version})")
+
+ return found
+
+
+def test_packages(requirements_file):
+ """
+ Entry point.
+ """
+ try:
+ with open(requirements_file, "r") as f:
+ requirements = f.readlines()
+
+ all_found = packaging_check(requirements)
+
+ except Exception as err:
+ # Surface any breakage to the configure script before failing the test.
+ report(err)
+ raise
+
+ assert all_found, "required packages are missing"
diff --git a/config/conftest.py b/config/conftest.py
new file mode 100644
index 00000000000..a9c2bc546e8
--- /dev/null
+++ b/config/conftest.py
@@ -0,0 +1,18 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Support for check_pytest.py. The configure script provides the path to
+# pytest-requirements.txt via the --requirements option added here.
+
+import pytest
+
+
+def pytest_addoption(parser):
+ parser.addoption(
+ "--requirements",
+ help="path to pytest-requirements.txt",
+ )
+
+
+@pytest.fixture
+def requirements_file(request):
+ return request.config.getoption("--requirements")
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
new file mode 100644
index 00000000000..b941624b2f3
--- /dev/null
+++ b/config/pytest-requirements.txt
@@ -0,0 +1,21 @@
+#
+# This file contains the Python packages which are required in order for us to
+# enable pytest.
+#
+# The syntax is a *subset* of pip's requirements.txt syntax, so that both pip
+# and check_pytest.py can use it. Only whole-line comments and standard Python
+# dependency specifiers are allowed. pip-specific goodies like includes and
+# environment substitutions are not supported; keep it simple.
+#
+# Packages belong here if their absence should cause a configuration failure. If
+# you'd like to make a package optional, consider using pytest.importorskip()
+# instead.
+#
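+# Standard dependency specifiers may carry PEP 508 environment markers, which
+# check_pytest.py evaluates when deciding whether a requirement applies to the
+# current Python. An illustrative (hypothetical) line, restricting a package
+# to older Pythons:
+#
+#     example-backport; python_version < "3.8"
+#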
+
+# pytest 7.0 was the last version which supported Python 3.6, but the BSDs have
+# started putting 8.x into ports, so we support both. (pytest 8 can be used
+# throughout once we drop support for Python 3.7.)
+pytest >= 7.0, < 9
+
+# packaging is used by check_pytest.py at configure time.
+packaging
diff --git a/configure b/configure
index 22cd866147b..aa93fa5f0aa 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,7 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -771,6 +772,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -849,6 +851,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1549,7 +1552,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3631,7 +3637,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3659,6 +3665,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19074,6 +19106,78 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Python packages required for pytest" >&5
+$as_echo_n "checking for Python packages required for pytest... " >&6; }
+ modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`
+ if test $? -eq 0; then
+ echo "$modulestderr" >&5
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $modulestderr" >&5
+$as_echo "$modulestderr" >&6; }
+ as_fn_error $? "Additional Python packages are required to run the pytest suites" "$LINENO" 5
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index e44943aa6fe..25442050f34 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2415,6 +2420,22 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ AC_MSG_ERROR([pytest not found])
+ fi
+ AC_MSG_CHECKING(for Python packages required for pytest)
+ [modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`]
+ if test $? -eq 0; then
+ echo "$modulestderr" >&AS_MESSAGE_LOG_FD
+ AC_MSG_RESULT(yes)
+ else
+ AC_MSG_RESULT([$modulestderr])
+ AC_MSG_ERROR([Additional Python packages are required to run the pytest suites])
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 37ed68ceeb4..06eb7a19210 100644
--- a/meson.build
+++ b/meson.build
@@ -1702,6 +1702,39 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+ if pytest.found()
+ pytest_check = run_command(pytest,
+ '-c', 'pytest.ini',
+ '--confcutdir=config',
+ '--capture=no',
+ 'config/check_pytest.py',
+ '--requirements', 'config/pytest-requirements.txt',
+ check: false)
+ if pytest_check.returncode() != 0
+ message(pytest_check.stderr())
+ if pytestopt.enabled()
+ error('Additional Python packages are required to run the pytest suites.')
+ else
+ warning('Additional Python packages are required to run the pytest suites.')
+ endif
+ else
+ pytest_enabled = true
+ endif
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3779,6 +3812,63 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = [
+ pytest.full_path(),
+ '-c', meson.project_source_root() / 'pytest.ini',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest' / 'plugins')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3953,6 +4043,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -3993,6 +4084,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 00000000000..8e8388f3afc
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1,6 @@
+[pytest]
+minversion = 7.0
+
+# Ignore ./config (which contains the configure-time check_pytest.py tests) by
+# default.
+addopts = --ignore ./config
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0aa389bc710..8a6885206ce 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -353,6 +354,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -507,6 +509,27 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest/plugins:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pytest.ini' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 0def244c901..f68acd57bc4 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
new file mode 100644
index 00000000000..ef8291e291c
--- /dev/null
+++ b/src/test/pytest/plugins/pgtap.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+from typing import Optional
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
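+# A wrapper typically has this shape (an illustrative sketch, mirroring the
+# implementations below):
+#
+#     @pytest.hookimpl(hookwrapper=True)
+#     def pytest_some_hook(item):
+#         ...          # runs before the remaining hooks in the chain
+#         res = yield  # invokes the rest of the hook chain
+#         ...          # runs afterwards; inspect or modify res here
+#         return res
+#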
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/test/pytest/pyt/test_something.py b/src/test/pytest/pyt/test_something.py
new file mode 100644
index 00000000000..5bd45618512
--- /dev/null
+++ b/src/test/pytest/pyt/test_something.py
@@ -0,0 +1,17 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import pytest
+
+
+@pytest.fixture
+def hey():
+ yield
+ raise "uh-oh"
+
+
+def test_something(hey):
+ assert 2 == 4
+
+
+def test_something_else():
+ assert 2 == 2
--
2.51.1
Attachment: v3-0003-WIP-pytest-Add-some-SSL-client-tests.patch
From f0cf8b502b183c113a82a113b8c0a75c9b0f7904 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 19 Aug 2025 12:56:45 -0700
Subject: [PATCH v3 03/10] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The `pg` test package contains some helpers and fixtures (as well as
some self-tests for more complicated behavior). Of note:
- pg.require_test_extra() lets you mark a test/class/module as skippable
if PG_TEST_EXTRA does not contain the necessary strings.
- pg.remaining_timeout() is a function which can be repeatedly called to
determine how much of the PG_TEST_TIMEOUT_DEFAULT remains for the
current test item.
- pg.libpq is a fixture that wraps libpq.so in a more friendly, but
still low-level, ctypes FFI. Allocated resources are unwound and
released during test teardown.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pg.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
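
To give a feel for the resulting test style, here is a condensed version
of one of the new client tests (fixture names are as introduced by this
patch, and are provided through the suite's conftest.py; see
pyt/test_client.py for the full version):

    import socket

    import pytest
    import pg

    pytestmark = pg.require_test_extra("ssl")

    def test_refused_sslmode_require(libpq, tcp_server):
        def refuse_ssl(s: socket.socket) -> None:
            s.recv(8)     # discard the 8-byte SSLRequest...
            s.send(b"N")  # ...and refuse it

        # Run the mock server logic on a background thread.
        tcp_server.background(refuse_ssl)

        with pytest.raises(libpq.Error, match="server does not support SSL"):
            with libpq:
                libpq.must_connect(**tcp_server.conninfo, sslmode="require")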
---
.cirrus.tasks.yml | 18 +-
config/pytest-requirements.txt | 10 ++
pytest.ini | 3 +
src/test/pytest/meson.build | 1 +
src/test/pytest/pg/__init__.py | 3 +
src/test/pytest/pg/_env.py | 55 ++++++
src/test/pytest/pg/fixtures.py | 212 +++++++++++++++++++++++
src/test/pytest/pyt/conftest.py | 3 +
src/test/pytest/pyt/test_libpq.py | 171 ++++++++++++++++++
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 129 ++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++
13 files changed, 885 insertions(+), 6 deletions(-)
create mode 100644 src/test/pytest/pg/__init__.py
create mode 100644 src/test/pytest/pg/_env.py
create mode 100644 src/test/pytest/pg/fixtures.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 80f9b394bd2..4e744f1c105 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -225,6 +225,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -316,6 +317,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -339,8 +341,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -501,8 +504,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -643,6 +647,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -663,6 +668,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -801,7 +807,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -864,7 +870,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
index b941624b2f3..0bd6cadf608 100644
--- a/config/pytest-requirements.txt
+++ b/config/pytest-requirements.txt
@@ -19,3 +19,13 @@ pytest >= 7.0, < 9
# packaging is used by check_pytest.py at configure time.
packaging
+
+# Notes on the cryptography package:
+# - 3.3.2 is shipped on Debian bullseye.
+# - 3.4.x drops support for Python 2, making it a version of note for older LTS
+# distros.
+# - 35.x switched versioning schemes and moved to Rust parsing.
+# - 40.x is the last version supporting Python 3.6.
+# XXX Is it appropriate to require cryptography, or should we simply skip
+# dependent tests?
+cryptography >= 3.3.2
diff --git a/pytest.ini b/pytest.ini
index 8e8388f3afc..e7aa84f3a84 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -4,3 +4,6 @@ minversion = 7.0
# Ignore ./config (which contains the configure-time check_pytest.py tests) by
# default.
addopts = --ignore ./config
+
+# Common test code can be found here.
+pythonpath = src/test/pytest
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..f53193e8686 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -11,6 +11,7 @@ tests += {
'pytest': {
'tests': [
'pyt/test_something.py',
+ 'pyt/test_libpq.py',
],
},
}
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
new file mode 100644
index 00000000000..ef8faf54ca4
--- /dev/null
+++ b/src/test/pytest/pg/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import has_test_extra, require_test_extra
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pg/_env.py
new file mode 100644
index 00000000000..6f18af07844
--- /dev/null
+++ b/src/test/pytest/pg/_env.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+from typing import List, Optional
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extra(*keys: str) -> pytest.MarkDecorator:
+    """
+    A convenience annotation which will skip tests unless all of the required
+    keys are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pg.require_test_extra("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pg.require_test_extra("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([has_test_extra(k) for k in keys]),
+ reason="requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys)),
+ )
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
new file mode 100644
index 00000000000..b5d3bff69a8
--- /dev/null
+++ b/src/test/pytest/pg/fixtures.py
@@ -0,0 +1,212 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import platform
+import time
+from typing import Any, Callable, Dict
+
+import pytest
+
+from ._env import test_timeout_default
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle():
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(name, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ return lib
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self):
+ return self._lib.PQresultStatus(self._res)
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str) -> PGresult:
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+
+@pytest.fixture
+def libpq(libpq_handle, remaining_timeout):
+ """
+ Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
+ track of allocated resources and cleans them up during teardown. See
+ _Libpq's public API for details.
+ """
+
+ class _Libpq(contextlib.ExitStack):
+ CONNECTION_OK = 0
+
+ PGRES_EMPTY_QUERY = 0
+
+ class Error(RuntimeError):
+ """
+ libpq.Error is the exception class for application-level errors that
+ are encountered during libpq operations.
+ """
+
+ pass
+
+ def __init__(self):
+ super().__init__()
+ self.lib = libpq_handle
+
+ def _connstr(self, opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+ def must_connect(self, **opts) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a libpq.PGconn object wrapping the connection handle. A
+ failure will raise libpq.Error.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
+
+ # Ensure the connection handle is always closed at the end of the
+ # test.
+ conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
+
+ if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
+ raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
+
+ return conn
+
+ with _Libpq() as lib:
+ yield lib
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..ecb72be26d7
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from pg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..9f0857cc612
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,171 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(libpq, opts, expected):
+ """Tests the escape behavior for libpq._connstr()."""
+ assert libpq._connstr(opts) == expected
+
+
+def test_must_connect_errors(libpq):
+ """Tests that must_connect() raises libpq.Error."""
+ with pytest.raises(libpq.Error, match="invalid connection option"):
+ libpq.must_connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(libpq.Error, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ with libpq:
+ libpq.must_connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..fb4db372f03
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,129 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+import pg
+from pg.fixtures import *
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+    - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..28110ae0717
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+    perform an SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(libpq.Error, match="server does not support SSL"):
+ with libpq: # XXX tests shouldn't need to do this
+ libpq.must_connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(libpq, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = libpq.must_connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
--
2.51.1
Attachment: v3-0004-WIP-pytest-Add-some-server-side-SSL-tests.patch
From dc96eb0721074a522568f5ac608522ea30001b6b Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 22 Aug 2025 17:39:40 -0700
Subject: [PATCH v3 04/10] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
New session-level fixtures have been added which probably need to
migrate to the `pg` package. Of note:
- datadir points to the server's data directory
- sockdir points to the server's UNIX socket/lock directory
- server_instance actually inits and starts a server via the pg_ctl on
PATH (and could eventually point at an installcheck target)
Wrapping these session-level fixtures is pg_server[_session], which
provides APIs for configuration changes that unwind themselves at the
end of fixture scopes. There's also an example of nested scopes, via
pg_server_session.subcontext(). Many TODOs remain before we're on par
with Test::Cluster, but this should illustrate my desired architecture
pretty well.
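As a sketch of the intended usage (with illustrative GUC and HBA values,
not part of this patch):

    def test_example(pg_server):
        with pg_server.reloading() as s:
            s.conf.set(log_min_messages="debug1")
            s.hba.prepend("local all all trust")
        # connect and assert here; the changes are unwound (and the
        # server reloaded again) at the end of the fixture scope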
Windows currently uses SCRAM-over-UNIX for the admin account rather than
SSPI-over-TCP. There's some dead Win32 code in pg.current_windows_user,
but I've kept it as an illustration of how a developer might write such
code for SSPI. I'll probably remove it in a future patch version.
TODOs:
- port more server configuration behavior from PostgreSQL::Test::Cluster
- decide again on "session" vs. "module" scope for server fixtures
- improve remaining_timeout() integration with socket operations; at the
moment, the socket timeout is applied afresh to every operation rather
than decrementing toward the test deadline
---
src/test/pytest/pg/__init__.py | 1 +
src/test/pytest/pg/_win32.py | 145 +++++++++
src/test/ssl/pyt/conftest.py | 113 +++++++
src/test/ssl/pyt/test_server.py | 538 ++++++++++++++++++++++++++++++++
4 files changed, 797 insertions(+)
create mode 100644 src/test/pytest/pg/_win32.py
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
index ef8faf54ca4..5dae49b6406 100644
--- a/src/test/pytest/pg/__init__.py
+++ b/src/test/pytest/pg/__init__.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
from ._env import has_test_extra, require_test_extra
+from ._win32 import current_windows_user
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pg/_win32.py
new file mode 100644
index 00000000000..3fd67b10191
--- /dev/null
+++ b/src/test/pytest/pg/_win32.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import ctypes
+import platform
+
+
+def current_windows_user():
+ """
+ A port of pg_regress.c's current_windows_user() helper. Returns
+ (accountname, domainname).
+
+ XXX This is dead code now, but I'm keeping it as a motivating example of
+ Win32 interaction; someone may also find it useful in the future when
+ writing SSPI tests.
+ """
+ try:
+ advapi32 = ctypes.windll.advapi32
+ kernel32 = ctypes.windll.kernel32
+ except AttributeError:
+ raise RuntimeError(
+ f"current_windows_user() is not supported on {platform.system()}"
+ )
+
+ def raise_winerror_when_false(result, func, arguments):
+ """
+ A ctypes errcheck handler that raises WinError (which will contain the
+ result of GetLastError()) when the function's return value is false.
+ """
+ if not result:
+ raise ctypes.WinError()
+
+ #
+ # Function Prototypes
+ #
+
+ from ctypes import wintypes
+
+ # GetCurrentProcess
+ kernel32.GetCurrentProcess.restype = wintypes.HANDLE
+ kernel32.GetCurrentProcess.argtypes = []
+
+ # OpenProcessToken
+ TOKEN_READ = 0x00020008
+
+ advapi32.OpenProcessToken.restype = wintypes.BOOL
+ advapi32.OpenProcessToken.argtypes = [
+ wintypes.HANDLE,
+ wintypes.DWORD,
+ wintypes.PHANDLE,
+ ]
+ advapi32.OpenProcessToken.errcheck = raise_winerror_when_false
+
+ # GetTokenInformation
+ PSID = wintypes.LPVOID # we don't need the internals
+ TOKEN_INFORMATION_CLASS = wintypes.INT
+ TokenUser = 1
+
+ class SID_AND_ATTRIBUTES(ctypes.Structure):
+ _fields_ = [
+ ("Sid", PSID),
+ ("Attributes", wintypes.DWORD),
+ ]
+
+ class TOKEN_USER(ctypes.Structure):
+ _fields_ = [
+ ("User", SID_AND_ATTRIBUTES),
+ ]
+
+ advapi32.GetTokenInformation.restype = wintypes.BOOL
+ advapi32.GetTokenInformation.argtypes = [
+ wintypes.HANDLE,
+ TOKEN_INFORMATION_CLASS,
+ wintypes.LPVOID,
+ wintypes.DWORD,
+ wintypes.PDWORD,
+ ]
+ advapi32.GetTokenInformation.errcheck = raise_winerror_when_false
+
+ # LookupAccountSid
+ SID_NAME_USE = wintypes.INT
+ PSID_NAME_USE = ctypes.POINTER(SID_NAME_USE)
+
+ advapi32.LookupAccountSidW.restype = wintypes.BOOL
+ advapi32.LookupAccountSidW.argtypes = [
+ wintypes.LPCWSTR,
+ PSID,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ PSID_NAME_USE,
+ ]
+ advapi32.LookupAccountSidW.errcheck = raise_winerror_when_false
+
+ #
+ # Implementation (see pg_SSPI_recv_auth())
+ #
+
+ # Get the current process token...
+ token = wintypes.HANDLE()
+ proc = kernel32.GetCurrentProcess()
+ advapi32.OpenProcessToken(proc, TOKEN_READ, token)
+
+ # ...then read the TOKEN_USER struct for that token...
+ info = TOKEN_USER()
+ infolen = wintypes.DWORD()
+
+ try:
+ # (The data returned is larger than TOKEN_USER itself, since the SID
+ # is appended to the same buffer, so we have to query the correct
+ # length first.)
+ advapi32.GetTokenInformation(token, TokenUser, None, 0, ctypes.byref(infolen))
+ assert False, "GetTokenInformation succeeded unexpectedly"
+
+ except OSError as err:
+ assert err.winerror == 122 # insufficient buffer
+
+ ctypes.resize(info, infolen.value)
+ advapi32.GetTokenInformation(
+ token,
+ TokenUser,
+ ctypes.byref(info),
+ ctypes.sizeof(info),
+ ctypes.byref(infolen),
+ )
+
+ # ...then pull the account and domain names out of the user SID.
+ MAXPGPATH = 1024
+
+ account = ctypes.create_unicode_buffer(MAXPGPATH)
+ domain = ctypes.create_unicode_buffer(MAXPGPATH)
+ accountlen = wintypes.DWORD(ctypes.sizeof(account))
+ domainlen = wintypes.DWORD(ctypes.sizeof(domain))
+ use = SID_NAME_USE()
+
+ advapi32.LookupAccountSidW(
+ None,
+ info.User.Sid,
+ account,
+ ctypes.byref(accountlen),
+ domain,
+ ctypes.byref(domainlen),
+ ctypes.byref(use),
+ )
+
+ return (account.value, domain.value)
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index fb4db372f03..85d2c994828 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,6 +1,12 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
+import os
+import pathlib
+import platform
+import secrets
+import socket
+import subprocess
import tempfile
from collections import namedtuple
@@ -127,3 +133,110 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def server_instance(certs, datadir, sockdir, winpassword):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ TODO: when installcheck is supported, this should optionally point to the
+ currently running server instead.
+ """
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ subprocess.check_call(
+ ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
+ )
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = os.path.join(datadir, "postgresql.log")
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
+ yield (hostaddr, port)
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..2d0be735371
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,538 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import ssl
+import struct
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Dict, List, Union
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture(scope="session")
+def connenv(server_instance, sockdir, datadir):
+ """
+ Provides the values for several PG* environment variables needed for our
+ utility programs to connect to the server_instance.
+ """
+ return {
+ "PGHOST": str(sockdir),
+ "PGPORT": str(server_instance[1]),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(datadir),
+ }
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ #
+ # TODO: this is less helpful if there are multiple layers, because it's
+ # not clear which backup to look at. Can the backup name be printed as
+ # part of the failed test output? Should we only swap on test failure?
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines: Union[str, List[str]]):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+@pytest.fixture(scope="session")
+def pg_server_session(server_instance, connenv, datadir, winpassword):
+ """
+ Provides common routines for configuring and connecting to the
+ server_instance. For example:
+
+ users = pg_server_session.create_users("one", "two")
+ dbs = pg_server_session.create_dbs("default")
+
+ with pg_server_session.reloading() as s:
+ s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
+
+ conn = connect_somehow(**pg_server_session.conninfo)
+ ...
+
+ Attributes of note are
+ - .conninfo: provides TCP connection info for the server
+
+ This fixture unwinds its configuration changes at the end of the pytest
+ session. For more granular changes, pg_server_session.subcontext() splits
+ off a "nested" context to allow smaller scopes.
+ """
+
+ class _Server(contextlib.ExitStack):
+ conninfo = dict(
+ hostaddr=server_instance[0],
+ port=server_instance[1],
+ )
+
+ # for _backup_configuration()
+ _Backup = namedtuple("Backup", "conf, hba")
+
+ def subcontext(self):
+ """
+ Creates a new server stack instance that can be tied to a smaller
+ scope than "session".
+ """
+ # So far, there doesn't seem to be a need to link the two objects,
+ # since HBA/Config/FileBackup operate directly on the filesystem and
+ # will appear to "nest" naturally.
+ return self.__class__()
+
+ def create_users(self, *userkeys: str) -> Dict[str, str]:
+ """
+ Creates new users which will be dropped at the end of the server
+ context.
+
+ For each provided key, a related user name will be selected and
+ stored in a map. This map is returned to let calling code look up
+ the selected usernames (instead of hardcoding them and potentially
+ stomping on an existing installation).
+ """
+ usermap = {}
+
+ for u in userkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = u + "user"
+ usermap[u] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE USER " + name)
+ self.callback(self.psql, "-c", "DROP USER " + name)
+
+ return usermap
+
+ def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
+ """
+ Creates new databases which will be dropped at the end of the server
+ context. See create_users() for the meaning of the keys and returned
+ map.
+ """
+ dbmap = {}
+
+ for d in dbkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = d + "db"
+ dbmap[d] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE DATABASE " + name)
+ self.callback(self.psql, "-c", "DROP DATABASE " + name)
+
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ try:
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+ except:
+ # We only want to reload at the end of the suite if there were
+ # no errors. During exceptions, the pushed callback handles
+ # things instead, so there's nothing to do here.
+ raise
+ else:
+ # Suite completed successfully.
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ try:
+ self.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ except:
+ raise
+ else:
+ self.pg_ctl("restart")
+
+ def psql(self, *args):
+ """
+ Runs psql with the given arguments. Password prompts are always
+ disabled. On Windows, the admin password will be included in the
+ environment.
+ """
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=winpassword)
+ else:
+ pw = None
+
+ self._run("psql", "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """
+ Runs pg_ctl with the given arguments. Log output will be placed in
+ postgresql.log in the server's data directory.
+
+ TODO: put the log in TESTLOGDIR
+ """
+ self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
+
+ def _run(self, cmd, *args, addenv: dict = None):
+ # Override the existing environment with the connenv values and
+ # anything the caller wanted to add. (Python 3.9 gives us the
+ # less-ugly `os.environ | connenv` merge operator.)
+ subenv = dict(os.environ, **connenv)
+ if addenv:
+ subenv.update(addenv)
+
+ subprocess.check_call([cmd, *args], env=subenv)
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return self._Backup(
+ hba=self.enter_context(HBA(datadir)),
+ conf=self.enter_context(Config(datadir)),
+ )
+
+ with _Server() as s:
+ yield s
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_session, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module. The fixture
+ variable is a tuple (users, dbs) containing the user and database names that
+ have been chosen for the test session.
+ """
+ try:
+ with pg_server_session.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_session.create_users(
+ "ssl",
+ )
+
+ dbs = pg_server_session.create_dbs(
+ "ssl",
+ )
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
+
+
+@pytest.fixture
+def pg_server(pg_server_session):
+ """
+ A per-test instance of pg_server_session. Use this fixture to make changes
+ to the server which will be rolled back at the end of every test.
+ """
+ with pg_server_session.subcontext() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+@pytest.mark.parametrize(
+ # fmt: off
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+ # fmt: on
+)
+def test_direct_ssl_certificate_authentication(
+ pg_server,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg_server.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.51.1
Attachment: v3-0005-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch (text/x-patch)
From 09b5c7d9d04966b842ac19f66aac9e2dd7097b3e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v3 05/10] ci: Add MTEST_SUITES for optional test tailoring
This should make it easier to control the test cycle time for Cirrus. Add
the desired suites (remembering `--suite setup`!) to the top-level
environment variable.
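For example, to run only the setup and SSL suites (an illustrative value;
patch 0006 enables a concrete set):

    MTEST_SUITES: --suite setup --suite ssl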
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 4e744f1c105..706a809f641 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -247,7 +248,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -391,7 +392,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -605,7 +606,7 @@ task:
test_world_script: |
su postgres <<-EOF
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -617,7 +618,7 @@ task:
test_world_32_script: |
su postgres <<-EOF
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -743,7 +744,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -826,7 +827,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -887,7 +888,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.51.1
Attachment: v3-0006-XXX-run-pytest-and-ssl-suite-all-OSes.patch (text/x-patch)
From 6912ea5437feedeb9ca65e312d1726c23671ad54 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:38:52 -0700
Subject: [PATCH v3 06/10] XXX run pytest and ssl suite, all OSes
---
.cirrus.star | 2 +-
.cirrus.tasks.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.cirrus.star b/.cirrus.star
index e9bb672b959..7c1caaa12f1 100644
--- a/.cirrus.star
+++ b/.cirrus.star
@@ -73,7 +73,7 @@ def compute_environment_vars():
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
- default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
+ default_manual_trigger_tasks = []
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 706a809f641..ddb5305dc81 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,7 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
- MTEST_SUITES: # --suite setup --suite ssl --suite ...
+ MTEST_SUITES: --suite setup --suite pytest --suite ssl
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
--
2.51.1
Attachment: v3-0007-Refactor-and-improve-pytest-infrastructure.patch (text/x-patch)
From 220c4db5e6bd5996da4b31abe35a43fc61abb71d Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Sun, 19 Oct 2025 23:01:30 +0200
Subject: [PATCH v3 07/10] Refactor and improve pytest infrastructure
This change refactors the pytest-based test infrastructure and adds
several new features.
The primary features it adds are:
- A `sql` method on `PGconn`: it takes a query and returns the results
as native Python types (see the example below).
- A `conn` fixture: a libpq-based connection to the default Postgres
server.
- Use the `pg_config` binary to find the libdir and bindir (can be
overridden by setting PG_CONFIG). Previously I had to set
LD_LIBRARY_PATH when running pytest manually.
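Together, these make simple end-to-end tests quite compact. A minimal
sketch, adapted from the fixture docstrings below:

    def test_arithmetic(conn):
        assert conn.sql("SELECT 1 + 1") == 2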
The refactoring it does:
- Rename `pg_server` fixture to `pg` since it'll likely be one of the
most commonly used ones.
- Rename `pg` module to `pypg` to avoid naming conflict/shadowing
problems with the newly renamed `pg` fixture
- Move class definitions outside of fixtures to separate modules (either
in the `pypg` module or the new `libpq` module)
- Move all "general" fixtures to the `pypg.fixtures` module, instead of
having them be defined in the ssl module.
---
src/test/pytest/libpq.py | 409 ++++++++++++++++++++++
src/test/pytest/pg/fixtures.py | 212 -----------
src/test/pytest/plugins/pgtap.py | 1 -
src/test/pytest/{pg => pypg}/__init__.py | 0
src/test/pytest/{pg => pypg}/_env.py | 1 -
src/test/pytest/{pg => pypg}/_win32.py | 0
src/test/pytest/pypg/fixtures.py | 175 +++++++++
src/test/pytest/pypg/server.py | 387 ++++++++++++++++++++
src/test/pytest/pypg/util.py | 42 +++
src/test/pytest/pyt/conftest.py | 3 +-
src/test/pytest/pyt/test_libpq.py | 23 +-
src/test/pytest/pyt/test_query_helpers.py | 286 +++++++++++++++
src/test/ssl/pyt/conftest.py | 136 ++-----
src/test/ssl/pyt/test_client.py | 26 +-
src/test/ssl/pyt/test_server.py | 380 +-------------------
15 files changed, 1370 insertions(+), 711 deletions(-)
create mode 100644 src/test/pytest/libpq.py
delete mode 100644 src/test/pytest/pg/fixtures.py
rename src/test/pytest/{pg => pypg}/__init__.py (100%)
rename src/test/pytest/{pg => pypg}/_env.py (97%)
rename src/test/pytest/{pg => pypg}/_win32.py (100%)
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
diff --git a/src/test/pytest/libpq.py b/src/test/pytest/libpq.py
new file mode 100644
index 00000000000..b851a117b66
--- /dev/null
+++ b/src/test/pytest/libpq.py
@@ -0,0 +1,409 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict
+
+
+class LibpqError(RuntimeError):
+ """
+ Exception class for application-level errors that are encountered during libpq operations.
+ """
+
+ pass
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ libpq_path = os.path.join(libdir, name)
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(libpq_path, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+# Helper converters
+def _parse_array(value: str, elem_oid: int) -> list:
+ """Parse PostgreSQL array syntax: {elem1,elem2,elem3}"""
+ if not (value.startswith("{") and value.endswith("}")):
+ return value
+
+ inner = value[1:-1]
+ if not inner:
+ return []
+
+ elements = inner.split(",")
+ result = []
+ for elem in elements:
+ elem = elem.strip()
+ if elem == "NULL":
+ result.append(None)
+ else:
+ # Remove quotes if present
+ if elem.startswith('"') and elem.endswith('"'):
+ elem = elem[1:-1]
+ result.append(_convert_pg_value(elem, elem_oid))
+
+ return result
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ error_msg = res.error_message()
+ raise LibpqError(f"Query failed: {error_msg}\nQuery: {query}")
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ error_msg = res.error_message() or f"Unexpected status: {status}"
+ raise LibpqError(f"Query failed: {error_msg}\nQuery: {query}")
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
deleted file mode 100644
index b5d3bff69a8..00000000000
--- a/src/test/pytest/pg/fixtures.py
+++ /dev/null
@@ -1,212 +0,0 @@
-# Copyright (c) 2025, PostgreSQL Global Development Group
-
-import contextlib
-import ctypes
-import platform
-import time
-from typing import Any, Callable, Dict
-
-import pytest
-
-from ._env import test_timeout_default
-
-
-@pytest.fixture
-def remaining_timeout():
- """
- This fixture provides a function that returns how much of the
- PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
- This value is never less than zero.
-
- This fixture is per-test, so the deadline is also reset on a per-test basis.
- """
- now = time.monotonic()
- deadline = now + test_timeout_default()
-
- return lambda: max(deadline - time.monotonic(), 0)
-
-
-class _PGconn(ctypes.Structure):
- pass
-
-
-class _PGresult(ctypes.Structure):
- pass
-
-
-_PGconn_p = ctypes.POINTER(_PGconn)
-_PGresult_p = ctypes.POINTER(_PGresult)
-
-
-@pytest.fixture(scope="session")
-def libpq_handle():
- """
- Loads a ctypes handle for libpq. Some common function prototypes are
- initialized for general use.
- """
- system = platform.system()
-
- if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
- name = "libpq.so.5"
- elif system == "Darwin":
- name = "libpq.5.dylib"
- elif system == "Windows":
- name = "libpq.dll"
- else:
- assert False, f"the libpq fixture must be updated for {system}"
-
- # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
- # preferred way around that is to know the absolute path to libpq.dll, but
- # that doesn't seem to mesh well with the current test infrastructure. For
- # now, enable "standard" LoadLibrary behavior.
- loadopts = {}
- if system == "Windows":
- loadopts["winmode"] = 0
-
- lib = ctypes.CDLL(name, **loadopts)
-
- #
- # Function Prototypes
- #
-
- lib.PQconnectdb.restype = _PGconn_p
- lib.PQconnectdb.argtypes = [ctypes.c_char_p]
-
- lib.PQstatus.restype = ctypes.c_int
- lib.PQstatus.argtypes = [_PGconn_p]
-
- lib.PQexec.restype = _PGresult_p
- lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
-
- lib.PQresultStatus.restype = ctypes.c_int
- lib.PQresultStatus.argtypes = [_PGresult_p]
-
- lib.PQclear.restype = None
- lib.PQclear.argtypes = [_PGresult_p]
-
- lib.PQerrorMessage.restype = ctypes.c_char_p
- lib.PQerrorMessage.argtypes = [_PGconn_p]
-
- lib.PQfinish.restype = None
- lib.PQfinish.argtypes = [_PGconn_p]
-
- return lib
-
-
-class PGresult(contextlib.AbstractContextManager):
- """Wraps a raw _PGresult_p with a more friendly interface."""
-
- def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
- self._lib = lib
- self._res = res
-
- def __exit__(self, *exc):
- self._lib.PQclear(self._res)
- self._res = None
-
- def status(self):
- return self._lib.PQresultStatus(self._res)
-
-
-class PGconn(contextlib.AbstractContextManager):
- """
- Wraps a raw _PGconn_p with a more friendly interface. This is just a
- stub; it's expected to grow.
- """
-
- def __init__(
- self,
- lib: ctypes.CDLL,
- handle: _PGconn_p,
- stack: contextlib.ExitStack,
- ):
- self._lib = lib
- self._handle = handle
- self._stack = stack
-
- def __exit__(self, *exc):
- self._lib.PQfinish(self._handle)
- self._handle = None
-
- def exec(self, query: str) -> PGresult:
- """
- Executes a query via PQexec() and returns a PGresult.
- """
- res = self._lib.PQexec(self._handle, query.encode())
- return self._stack.enter_context(PGresult(self._lib, res))
-
-
-@pytest.fixture
-def libpq(libpq_handle, remaining_timeout):
- """
- Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
- track of allocated resources and cleans them up during teardown. See
- _Libpq's public API for details.
- """
-
- class _Libpq(contextlib.ExitStack):
- CONNECTION_OK = 0
-
- PGRES_EMPTY_QUERY = 0
-
- class Error(RuntimeError):
- """
- libpq.Error is the exception class for application-level errors that
- are encountered during libpq operations.
- """
-
- pass
-
- def __init__(self):
- super().__init__()
- self.lib = libpq_handle
-
- def _connstr(self, opts: Dict[str, Any]) -> str:
- """
- Flattens the provided options into a libpq connection string. Values
- are converted to str and quoted/escaped as necessary.
- """
- settings = []
-
- for k, v in opts.items():
- v = str(v)
- if not v:
- v = "''"
- else:
- v = v.replace("\\", "\\\\")
- v = v.replace("'", "\\'")
-
- if " " in v:
- v = f"'{v}'"
-
- settings.append(f"{k}={v}")
-
- return " ".join(settings)
-
- def must_connect(self, **opts) -> PGconn:
- """
- Connects to a server, using the given connection options, and
- returns a libpq.PGconn object wrapping the connection handle. A
- failure will raise libpq.Error.
-
- Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
- explicitly overridden in opts.
- """
-
- if "connect_timeout" not in opts:
- t = int(remaining_timeout())
- opts["connect_timeout"] = max(t, 1)
-
- conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
-
- # Ensure the connection handle is always closed at the end of the
- # test.
- conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
-
- if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
- raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
-
- return conn
-
- with _Libpq() as lib:
- yield lib
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
index ef8291e291c..6a729d252e1 100644
--- a/src/test/pytest/plugins/pgtap.py
+++ b/src/test/pytest/plugins/pgtap.py
@@ -2,7 +2,6 @@
import os
import sys
-from typing import Optional
import pytest
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pypg/__init__.py
similarity index 100%
rename from src/test/pytest/pg/__init__.py
rename to src/test/pytest/pypg/__init__.py
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pypg/_env.py
similarity index 97%
rename from src/test/pytest/pg/_env.py
rename to src/test/pytest/pypg/_env.py
index 6f18af07844..154c986d73e 100644
--- a/src/test/pytest/pg/_env.py
+++ b/src/test/pytest/pypg/_env.py
@@ -2,7 +2,6 @@
import logging
import os
-from typing import List, Optional
import pytest
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pypg/_win32.py
similarity index 100%
rename from src/test/pytest/pg/_win32.py
rename to src/test/pytest/pypg/_win32.py
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..cf22c8ec436
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,175 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import secrets
+import time
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ return load_libpq_handle(libdir)
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return capture(pg_config, "--bindir")
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return capture(pg_config, "--libdir")
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer(bindir, datadir, sockdir, winpassword, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context. This is useful when certain settings need
+ to be overridden at the module level through autouse fixtures; an
+ example of this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+ """
+ pg_server_module.set_timeout(remaining_timeout)
+ with pg_server_module.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..d6675cde93d
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,387 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import glob
+import os
+import pathlib
+import platform
+import socket
+import subprocess
+import tempfile
+import time
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(self, bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Initialize and start a PostgreSQL server instance.
+ """
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._winpassword = winpassword
+ self._pg_ctl = os.path.join(bindir, "pg_ctl")
+ self._log = os.path.join(datadir, "postgresql.log")
+
+ initdb = os.path.join(bindir, "initdb")
+ pg_ctl = self._pg_ctl
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ run(initdb, "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir)
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ run(pg_ctl, "-D", datadir, "init")
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = self._log
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ run(pg_ctl, "-D", datadir, "-l", log, "start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ pid = int(f.readline().strip())
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ self.pid = pid
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=self._winpassword)
+ else:
+ pw = None
+ self._run(os.path.join(self._bindir, "psql"), "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "-l", self._log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.sockdir),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a nested cleanup context tied to a smaller scope (e.g., module
+ or test).
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
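+
+ A sketch of typical use (the variable name is illustrative):
+
+ with server.subcontext():
+ ... # cleanup callbacks registered here run when the block exits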
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ run(self._pg_ctl, "-D", self.datadir, "-l", self._log, "stop")
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by the pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ This is a convenience method that automatically fills in the host, port,
+ and dbname (defaulting to 'postgres') for connecting to this server.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Connection cleanup is managed by the server's internal cleanup stack,
+ and the timeout comes from the function registered via set_timeout()
+ (the pg fixture does this automatically).
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ from libpq import connect as libpq_connect
+
+ # Set default connection options for this server
+ defaults = {
+ "host": str(self.sockdir),
+ "port": self.port,
+ "dbname": "postgres",
+ }
+
+ # Merge with user-provided options (user options take precedence)
+ defaults.update(opts)
+
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
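+ # A single string argument is run through the shell; otherwise the
+ # arguments are stringified and executed directly. For example (paths
+ # illustrative):
+ #
+ # run("pg_ctl", "-D", "/tmp/data", "start") # argument list, no shell
+ # run("ls *.log") # one string, via the shell
+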
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6, which we still like to support.
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
+ """capture runs the command and returns its stdout, sans trailing newline"""
+ out = run(command, *args, stdout=stdout, encoding=encoding, **kwargs).stdout
+ # str.removesuffix() is only available in Python 3.9+; strip a single
+ # trailing newline by hand to keep older interpreters working.
+ if out.endswith("\n"):
+ out = out[:-1]
+ return out
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
index ecb72be26d7..641af0bbac5 100644
--- a/src/test/pytest/pyt/conftest.py
+++ b/src/test/pytest/pyt/conftest.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
-from pg.fixtures import *
+
+from pypg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
index 9f0857cc612..4fcf4056f41 100644
--- a/src/test/pytest/pyt/test_libpq.py
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -9,6 +9,8 @@ from typing import Callable
import pytest
+from libpq import connstr, LibpqError
+
@pytest.mark.parametrize(
"opts, expected",
@@ -22,15 +24,15 @@ import pytest
(dict(keyword=" \\' "), r"keyword=' \\\' '"),
],
)
-def test_connstr(libpq, opts, expected):
- """Tests the escape behavior for libpq._connstr()."""
- assert libpq._connstr(opts) == expected
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
-def test_must_connect_errors(libpq):
- """Tests that must_connect() raises libpq.Error."""
- with pytest.raises(libpq.Error, match="invalid connection option"):
- libpq.must_connect(some_unknown_keyword="whatever")
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
@pytest.fixture
@@ -145,7 +147,7 @@ def local_server(tmp_path, remaining_timeout):
yield s
-def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+def test_connection_is_finished_on_error(connect, local_server):
"""Tests that PQfinish() gets called at the end of testing."""
expected_error = "something is wrong"
@@ -165,7 +167,6 @@ def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout)
local_server.background(serve_error)
- with pytest.raises(libpq.Error, match=expected_error):
+ with pytest.raises(LibpqError, match=expected_error):
# Exiting this context should result in PQfinish().
- with libpq:
- libpq.must_connect(host=local_server.host, port=local_server.port)
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..5a5a1ae1edf
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,286 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql(
+ "SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL"
+ )
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ import uuid
+
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ import uuid
+
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 85d2c994828..6e8699e0971 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,19 +1,14 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
-import os
-import pathlib
-import platform
-import secrets
-import socket
+import re
import subprocess
import tempfile
from collections import namedtuple
import pytest
-import pg
-from pg.fixtures import *
+from pypg.fixtures import *
@pytest.fixture(scope="session")
@@ -135,108 +130,51 @@ def certs(cryptography, tmp_path_factory):
return _Certs()
-@pytest.fixture(scope="session")
-def datadir(tmp_path_factory):
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
"""
- Returns the directory name to use as the server data directory. If
- TESTDATADIR is provided, that will be used; otherwise a new temporary
- directory is created in the pytest temp root.
+ Sets up required server settings for all tests in this module. Returns
+ a (users, dbs) tuple with the names chosen for the test module.
"""
- d = os.getenv("TESTDATADIR")
- if d:
- d = pathlib.Path(d)
- else:
- d = tmp_path_factory.mktemp("tmp_check")
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
- return d
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
-@pytest.fixture(scope="session")
-def sockdir(tmp_path_factory):
- """
- Returns the directory name to use as the server's unix_socket_directories
- setting. Local client connections use this as the PGHOST.
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
- At the moment, this is always put under the pytest temp root.
- """
- return tmp_path_factory.mktemp("sockfiles")
+ # Some other error happened.
+ raise
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
-@pytest.fixture(scope="session")
-def winpassword():
- """The per-session SCRAM password for the server admin on Windows."""
- return secrets.token_urlsafe(16)
+ return (users, dbs)
-@pytest.fixture(scope="session")
-def server_instance(certs, datadir, sockdir, winpassword):
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
"""
- Starts a running Postgres server listening on localhost. The HBA initially
- allows only local UNIX connections from the same user.
-
- TODO: when installcheck is supported, this should optionally point to the
- currently running server instead.
+ Creates a Cert for the "ssl" user.
"""
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
- # Lock down the HBA by default; tests can open it back up later.
- if platform.system() == "Windows":
- # On Windows, for admin connections, use SCRAM with a generated password
- # over local sockets. This requires additional work during initdb.
- method = "scram-sha-256"
-
- # NamedTemporaryFile doesn't work very nicely on Windows until Python
- # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
- # Until then, specify delete=False and manually unlink after use.
- with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
- pwfile.write(winpassword)
-
- subprocess.check_call(
- ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
- )
- os.unlink(pwfile.name)
-
- else:
- # For other OSes we can just use peer auth.
- method = "peer"
- subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
-
- with open(datadir / "pg_hba.conf", "w") as f:
- print(f"# default: local {method} connections only", file=f)
- print(f"local all all {method}", file=f)
-
- # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
- # addresses in one go.
- #
- # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
- if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
- addr = ("::1", 0)
- s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
-
- hostaddr, port, _, _ = s.getsockname()
- addrs = [hostaddr, "127.0.0.1"]
-
- else:
- addr = ("127.0.0.1", 0)
-
- s = socket.socket()
- s.bind(addr)
-
- hostaddr, port = s.getsockname()
- addrs = [hostaddr]
-
- log = os.path.join(datadir, "postgresql.log")
-
- with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
- print(file=f)
- print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
- print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
- print("port =", port, file=f)
- print("log_connections = all", file=f)
-
- # Between closing of the socket, s, and server start, we're racing against
- # anything that wants to open up ephemeral ports, so try not to put any new
- # work here.
-
- subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
- yield (hostaddr, port)
- subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
index 28110ae0717..247681f93cb 100644
--- a/src/test/ssl/pyt/test_client.py
+++ b/src/test/ssl/pyt/test_client.py
@@ -10,10 +10,11 @@ from typing import Callable
import pytest
-import pg
+import pypg
+from libpq import LibpqError, ExecStatus
# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
-pytestmark = pg.require_test_extra("ssl")
+pytestmark = pypg.require_test_extra("ssl")
@pytest.fixture(scope="session", autouse=True)
@@ -192,7 +193,7 @@ def ssl_server(tcp_server_class, certs):
@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
-def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
"""
Make sure client refuses to talk to non-SSL servers with stricter
sslmodes.
@@ -214,16 +215,15 @@ def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
tcp_server.background(refuse_ssl)
- with pytest.raises(libpq.Error, match="server does not support SSL"):
- with libpq: # XXX tests shouldn't need to do this
- libpq.must_connect(
- **tcp_server.conninfo,
- sslrootcert=certs.ca.certpath,
- sslmode=sslmode,
- )
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
-def test_verify_full_connection(libpq, ssl_server, certs):
+def test_verify_full_connection(connect, ssl_server, certs):
"""Completes a verify-full connection and empty query."""
def handle_empty_query(s: ssl.SSLSocket):
@@ -269,10 +269,10 @@ def test_verify_full_connection(libpq, ssl_server, certs):
ssl_server.background_ssl(handle_empty_query)
- conn = libpq.must_connect(
+ conn = connect(
**ssl_server.conninfo,
sslrootcert=certs.ca.certpath,
sslmode="verify-full",
)
with conn:
- assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
index 2d0be735371..60628d0c067 100644
--- a/src/test/ssl/pyt/test_server.py
+++ b/src/test/ssl/pyt/test_server.py
@@ -1,25 +1,16 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
-import contextlib
-import os
-import pathlib
-import platform
import re
-import shutil
import socket
import ssl
import struct
-import subprocess
-import tempfile
-from collections import namedtuple
-from typing import Dict, List, Union
import pytest
-import pg
+import pypg
# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
-pytestmark = pg.require_test_extra("ssl")
+pytestmark = pypg.require_test_extra("ssl")
#
@@ -27,363 +18,6 @@ pytestmark = pg.require_test_extra("ssl")
#
-@pytest.fixture(scope="session")
-def connenv(server_instance, sockdir, datadir):
- """
- Provides the values for several PG* environment variables needed for our
- utility programs to connect to the server_instance.
- """
- return {
- "PGHOST": str(sockdir),
- "PGPORT": str(server_instance[1]),
- "PGDATABASE": "postgres",
- "PGDATA": str(datadir),
- }
-
-
-class FileBackup(contextlib.AbstractContextManager):
- """
- A context manager which backs up a file's contents, restoring them on exit.
- """
-
- def __init__(self, file: pathlib.Path):
- super().__init__()
-
- self._file = file
-
- def __enter__(self):
- with tempfile.NamedTemporaryFile(
- prefix=self._file.name, dir=self._file.parent, delete=False
- ) as f:
- self._backup = pathlib.Path(f.name)
-
- shutil.copyfile(self._file, self._backup)
-
- return self
-
- def __exit__(self, *exc):
- # Swap the backup and the original file, so that the modified contents
- # can still be inspected in case of failure.
- #
- # TODO: this is less helpful if there are multiple layers, because it's
- # not clear which backup to look at. Can the backup name be printed as
- # part of the failed test output? Should we only swap on test failure?
- tmp = self._backup.parent / (self._backup.name + ".tmp")
-
- shutil.copyfile(self._file, tmp)
- shutil.copyfile(self._backup, self._file)
- shutil.move(tmp, self._backup)
-
-
-class HBA(FileBackup):
- """
- Backs up a server's HBA configuration and provides means for temporarily
- editing it. See also pg_server, which provides an instance of this class and
- context managers for enforcing the reload/restart order of operations.
- """
-
- def __init__(self, datadir: pathlib.Path):
- super().__init__(datadir / "pg_hba.conf")
-
- def prepend(self, *lines: Union[str, List[str]]):
- """
- Temporarily prepends lines to the server's pg_hba.conf.
-
- As sugar for aligning HBA columns in the tests, each line can be either
- a string or a list of strings. List elements will be joined by single
- spaces before they are written to file.
- """
- with open(self._file, "r") as f:
- prior_data = f.read()
-
- with open(self._file, "w") as f:
- for l in lines:
- if isinstance(l, list):
- print(*l, file=f)
- else:
- print(l, file=f)
-
- f.write(prior_data)
-
-
-class Config(FileBackup):
- """
- Backs up a server's postgresql.conf and provides means for temporarily
- editing it. See also pg_server, which provides an instance of this class and
- context managers for enforcing the reload/restart order of operations.
- """
-
- def __init__(self, datadir: pathlib.Path):
- super().__init__(datadir / "postgresql.conf")
-
- def set(self, **gucs):
- """
- Temporarily appends GUC settings to the server's postgresql.conf.
- """
-
- with open(self._file, "a") as f:
- print(file=f)
-
- for n, v in gucs.items():
- v = str(v)
-
- # TODO: proper quoting
- v = v.replace("\\", "\\\\")
- v = v.replace("'", "\\'")
- v = "'{}'".format(v)
-
- print(n, "=", v, file=f)
-
-
-@pytest.fixture(scope="session")
-def pg_server_session(server_instance, connenv, datadir, winpassword):
- """
- Provides common routines for configuring and connecting to the
- server_instance. For example:
-
- users = pg_server_session.create_users("one", "two")
- dbs = pg_server_session.create_dbs("default")
-
- with pg_server_session.reloading() as s:
- s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
-
- conn = connect_somehow(**pg_server_session.conninfo)
- ...
-
- Attributes of note are
- - .conninfo: provides TCP connection info for the server
-
- This fixture unwinds its configuration changes at the end of the pytest
- session. For more granular changes, pg_server_session.subcontext() splits
- off a "nested" context to allow smaller scopes.
- """
-
- class _Server(contextlib.ExitStack):
- conninfo = dict(
- hostaddr=server_instance[0],
- port=server_instance[1],
- )
-
- # for _backup_configuration()
- _Backup = namedtuple("Backup", "conf, hba")
-
- def subcontext(self):
- """
- Creates a new server stack instance that can be tied to a smaller
- scope than "session".
- """
- # So far, there doesn't seem to be a need to link the two objects,
- # since HBA/Config/FileBackup operate directly on the filesystem and
- # will appear to "nest" naturally.
- return self.__class__()
-
- def create_users(self, *userkeys: str) -> Dict[str, str]:
- """
- Creates new users which will be dropped at the end of the server
- context.
-
- For each provided key, a related user name will be selected and
- stored in a map. This map is returned to let calling code look up
- the selected usernames (instead of hardcoding them and potentially
- stomping on an existing installation).
- """
- usermap = {}
-
- for u in userkeys:
- # TODO: use a uniquifier to support installcheck
- name = u + "user"
- usermap[u] = name
-
- # TODO: proper escaping
- self.psql("-c", "CREATE USER " + name)
- self.callback(self.psql, "-c", "DROP USER " + name)
-
- return usermap
-
- def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
- """
- Creates new databases which will be dropped at the end of the server
- context. See create_users() for the meaning of the keys and returned
- map.
- """
- dbmap = {}
-
- for d in dbkeys:
- # TODO: use a uniquifier to support installcheck
- name = d + "db"
- dbmap[d] = name
-
- # TODO: proper escaping
- self.psql("-c", "CREATE DATABASE " + name)
- self.callback(self.psql, "-c", "DROP DATABASE " + name)
-
- return dbmap
-
- @contextlib.contextmanager
- def reloading(self):
- """
- Provides a context manager for making configuration changes.
-
- If the context suite finishes successfully, the configuration will
- be reloaded via pg_ctl. On teardown, the configuration changes will
- be unwound, and the server will be signaled to reload again.
-
- The context target contains the following attributes which can be
- used to configure the server:
- - .conf: modifies postgresql.conf
- - .hba: modifies pg_hba.conf
-
- For example:
-
- with pg_server_session.reloading() as s:
- s.conf.set(log_connections="on")
- s.hba.prepend("local all all trust")
- """
- try:
- # Push a reload onto the stack before making any other
- # unwindable changes. That way the order of operations will be
- #
- # # test
- # - config change 1
- # - config change 2
- # - reload
- # # teardown
- # - undo config change 2
- # - undo config change 1
- # - reload
- #
- self.callback(self.pg_ctl, "reload")
- yield self._backup_configuration()
- except:
- # We only want to reload at the end of the suite if there were
- # no errors. During exceptions, the pushed callback handles
- # things instead, so there's nothing to do here.
- raise
- else:
- # Suite completed successfully.
- self.pg_ctl("reload")
-
- @contextlib.contextmanager
- def restarting(self):
- """Like .reloading(), but with a full server restart."""
- try:
- self.callback(self.pg_ctl, "restart")
- yield self._backup_configuration()
- except:
- raise
- else:
- self.pg_ctl("restart")
-
- def psql(self, *args):
- """
- Runs psql with the given arguments. Password prompts are always
- disabled. On Windows, the admin password will be included in the
- environment.
- """
- if platform.system() == "Windows":
- pw = dict(PGPASSWORD=winpassword)
- else:
- pw = None
-
- self._run("psql", "-w", *args, addenv=pw)
-
- def pg_ctl(self, *args):
- """
- Runs pg_ctl with the given arguments. Log output will be placed in
- postgresql.log in the server's data directory.
-
- TODO: put the log in TESTLOGDIR
- """
- self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
-
- def _run(self, cmd, *args, addenv: dict = None):
- # Override the existing environment with the connenv values and
- # anything the caller wanted to add. (Python 3.9 gives us the
- # less-ugly `os.environ | connenv` merge operator.)
- subenv = dict(os.environ, **connenv)
- if addenv:
- subenv.update(addenv)
-
- subprocess.check_call([cmd, *args], env=subenv)
-
- def _backup_configuration(self):
- # Wrap the existing HBA and configuration with FileBackups.
- return self._Backup(
- hba=self.enter_context(HBA(datadir)),
- conf=self.enter_context(Config(datadir)),
- )
-
- with _Server() as s:
- yield s
-
-
-@pytest.fixture(scope="module", autouse=True)
-def ssl_setup(pg_server_session, certs, datadir):
- """
- Sets up required server settings for all tests in this module. The fixture
- variable is a tuple (users, dbs) containing the user and database names that
- have been chosen for the test session.
- """
- try:
- with pg_server_session.restarting() as s:
- s.conf.set(
- ssl="on",
- ssl_ca_file=certs.ca.certpath,
- ssl_cert_file=certs.server.certpath,
- ssl_key_file=certs.server.keypath,
- )
-
- # Reject by default.
- s.hba.prepend("hostssl all all all reject")
-
- except subprocess.CalledProcessError:
- # This is a decent place to skip if the server isn't set up for SSL.
- logpath = datadir / "postgresql.log"
- unsupported = re.compile("SSL is not supported")
-
- with open(logpath, "r") as log:
- for line in log:
- if unsupported.search(line):
- pytest.skip("the server does not support SSL")
-
- # Some other error happened.
- raise
-
- users = pg_server_session.create_users(
- "ssl",
- )
-
- dbs = pg_server_session.create_dbs(
- "ssl",
- )
-
- return (users, dbs)
-
-
-@pytest.fixture(scope="module")
-def client_cert(ssl_setup, certs):
- """
- Creates a Cert for the "ssl" user.
- """
- from cryptography import x509
- from cryptography.x509.oid import NameOID
-
- users, _ = ssl_setup
- user = users["ssl"]
-
- return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
-
-
-@pytest.fixture
-def pg_server(pg_server_session):
- """
- A per-test instance of pg_server_session. Use this fixture to make changes
- to the server which will be rolled back at the end of every test.
- """
- with pg_server_session.subcontext() as s:
- yield s
-
-
#
# Tests
#
@@ -394,8 +28,8 @@ CLIENT = "client"
SERVER = "server"
+# fmt: off
@pytest.mark.parametrize(
- # fmt: off
"auth_method, creds, expected_error",
[
# Trust allows anything.
@@ -416,10 +50,10 @@ SERVER = "server"
("cert", CLIENT, None),
("cert", SERVER, "authentication failed for user"),
],
- # fmt: on
)
+# fmt: on
def test_direct_ssl_certificate_authentication(
- pg_server,
+ pg,
ssl_setup,
certs,
client_cert,
@@ -440,7 +74,7 @@ def test_direct_ssl_certificate_authentication(
user = users["ssl"]
db = dbs["ssl"]
- with pg_server.reloading() as s:
+ with pg.reloading() as s:
s.hba.prepend(
["hostssl", db, user, "127.0.0.1/32", auth_method],
["hostssl", db, user, "::1/128", auth_method],
@@ -461,7 +95,7 @@ def test_direct_ssl_certificate_authentication(
# Make a direct SSL connection. There's no SSLRequest in the handshake; we
# simply wrap a TCP connection with OpenSSL.
- addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ addr = (pg.hostaddr, pg.port)
with socket.create_connection(addr) as s:
s.settimeout(remaining_timeout()) # XXX this resets every operation
--
2.51.1
On Wed Oct 22, 2025 at 2:44 PM CEST, Jelte Fennema-Nio wrote:
> So here's your patchset with an additional commit on top that does a
> bunch of refactoring/renaming and adding features.

Rebased to fix conflicts.
Attachments:
v3-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch
From f6823405eb994d457f8123df0d417ca2340e4c71 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v3 01/10] meson: Include TAP tests in the configuration
summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index 24aeffe929f..1ce2e79c436 100644
--- a/meson.build
+++ b/meson.build
@@ -3959,6 +3959,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -3995,3 +3996,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: e510378358540703a13b77090a0021853bae0745
--
2.51.1
v3-0002-Add-support-for-pytest-test-suites.patch
From 5a27976496db53d8e9b88ab59e6c71f0f42dedcd Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v3 02/10] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
test_something.py is intended to show a sample failure in the CI.
TODOs:
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
---
.cirrus.tasks.yml | 38 +++--
.gitignore | 1 +
config/check_pytest.py | 150 ++++++++++++++++++++
config/conftest.py | 18 +++
config/pytest-requirements.txt | 21 +++
configure | 108 +++++++++++++-
configure.ac | 25 +++-
meson.build | 92 ++++++++++++
meson_options.txt | 8 +-
pytest.ini | 6 +
src/Makefile.global.in | 23 +++
src/makefiles/meson.build | 2 +
src/test/Makefile | 11 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 +++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/plugins/pgtap.py | 193 ++++++++++++++++++++++++++
src/test/pytest/pyt/test_something.py | 17 +++
19 files changed, 736 insertions(+), 15 deletions(-)
create mode 100644 config/check_pytest.py
create mode 100644 config/conftest.py
create mode 100644 config/pytest-requirements.txt
create mode 100644 pytest.ini
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/plugins/pgtap.py
create mode 100644 src/test/pytest/pyt/test_something.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2fe9671f3dc..b3388351110 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -225,7 +227,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -317,7 +321,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -329,6 +336,7 @@ task:
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
CORE_DUMP_EXECUTABLE_DIR: $CIRRUS_WORKING_DIR/build/tmp_install/usr/local/pgsql/bin
+ TERM: # TODO why does pytest print ANSI escapes on OpenBSD?
MESON_FEATURES: >-
-Dbsd_auth=enabled
@@ -337,7 +345,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -496,8 +506,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -521,14 +533,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -665,6 +678,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -714,6 +729,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -787,6 +803,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -795,8 +813,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -859,7 +879,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..268426003b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
diff --git a/config/check_pytest.py b/config/check_pytest.py
new file mode 100644
index 00000000000..1562d16bcda
--- /dev/null
+++ b/config/check_pytest.py
@@ -0,0 +1,150 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Verify that pytest-requirements.txt is satisfied. This would probably be
+# easier with pip, but requiring pip on build machines is a non-starter for
+# many.
+#
+# This is coded as a pytest suite in order to check the Python distribution in
+# use by pytest, as opposed to the Python distribution being linked against
+# Postgres. In some setups they are separate.
+#
+# The design philosophy of this script is to bend over backwards to help people
+# figure out what is missing. The target audience for error output is the
+# buildfarm operator who just wants to get the tests running, not the test
+# developer who presumably already knows how to solve these problems.
+
+import importlib
+import sys
+from typing import List, Union # needed for earlier Python versions
+
+# importlib.metadata is part of the standard library from 3.8 onwards. Earlier
+# Python versions have an official backport called importlib_metadata, which can
+# generally be installed as a separate OS package (python3-importlib-metadata).
+# This complication can be removed once we stop supporting Python 3.7.
+try:
+ from importlib import metadata
+except ImportError:
+ try:
+ import importlib_metadata as metadata
+ except ImportError:
+ # package_version() will need to fall back. This is unlikely to happen
+ # in practice, because pytest 7.x depends on importlib_metadata itself.
+ metadata = None
+
+
+def report(*args):
+ """
+ Prints a configure-time message to the user. (The configure scripts will
+ display these messages and ignore the output from the pytest suite.) This
+ assumes --capture=no is in use, to avoid pytest's standard stream capture.
+ """
+ print(*args, file=sys.stderr)
+
+
+def package_version(pkg: str) -> Union[str, None]:
+ """
+ Returns the version of the named package, or None if the package is not
+ installed.
+
+ This function prefers to use the distribution package version, if we have
+ the necessary prerequisites. Otherwise it will fall back to the __version__
+ of the imported module, which aligns with pytest.importorskip().
+ """
+ if metadata is not None:
+ try:
+ return metadata.version(pkg)
+ except metadata.PackageNotFoundError:
+ return None
+
+ # This is an older Python and we don't have importlib_metadata. Fall back to
+ # __version__ instead.
+ try:
+ mod = importlib.import_module(pkg)
+ except ModuleNotFoundError:
+ return None
+
+ if hasattr(mod, "__version__"):
+ return mod.__version__
+
+ # We're out of options. If this turns out to cause problems in practice, we
+ # might need to require importlib_metadata on older buildfarm members. But
+ # since our top-level requirements list will be small, and this possibility
+ # will eventually age out with newer Pythons, don't spend more effort on
+ # this case for now.
+ report(f"Fix check_pytest.py! {pkg} has no __version__")
+ assert False, "internal error in package_version()"
+
+
+def packaging_check(requirements: List[str]) -> bool:
+ """
+ Reports the status of each required package to the configure program.
+ Returns True if all dependencies were found.
+ """
+ report() # an opening newline makes the configure output easier to read
+
+ try:
+ # packaging contains the PyPA definitions of requirement specifiers.
+ # This is contained in a separate OS package (for example,
+ # python3-packaging), but it's extremely likely that the user has it
+ # installed already, because modern versions of pytest depend on it too.
+ import packaging
+ from packaging.requirements import Requirement
+
+ except ImportError as err:
+ # We don't even have enough prerequisites to check our prerequisites.
+ # Print the import error as-is.
+ report(err)
+ return False
+
+ # Strip extraneous whitespace, whole-line comments, and empty lines from our
+ # specifier list.
+ requirements = [r.strip() for r in requirements]
+ requirements = [r for r in requirements if r and r[0] != "#"]
+
+ found = True
+ for spec in requirements:
+ req = Requirement(spec)
+
+ # Skip any packages marked as unneeded for this particular Python env.
+ if req.marker and not req.marker.evaluate():
+ continue
+
+ # Make sure the package is installed...
+ version = package_version(req.name)
+ if version is None:
+ report(f"package '{req.name}': not installed")
+ found = False
+ continue
+
+ # ...and that it has a compatible version.
+ if not req.specifier.contains(version):
+ report(
+ "package '{}': has version {}, but '{}' is required".format(
+ req.name, version, req.specifier
+ ),
+ )
+ found = False
+ continue
+
+ # Report installed packages too, to mirror check_modules.pl.
+ report(f"package '{req.name}': installed (version {version})")
+
+ return found
+
+
+def test_packages(requirements_file):
+ """
+ Entry point.
+ """
+ try:
+ with open(requirements_file, "r") as f:
+ requirements = f.readlines()
+
+ all_found = packaging_check(requirements)
+
+ except Exception as err:
+ # Surface any breakage to the configure script before failing the test.
+ report(err)
+ raise
+
+ assert all_found, "required packages are missing"
diff --git a/config/conftest.py b/config/conftest.py
new file mode 100644
index 00000000000..a9c2bc546e8
--- /dev/null
+++ b/config/conftest.py
@@ -0,0 +1,18 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Support for check_pytest.py. The configure script provides the path to
+# pytest-requirements.txt via the --requirements option added here.
+
+import pytest
+
+
+def pytest_addoption(parser):
+ parser.addoption(
+ "--requirements",
+ help="path to pytest-requirements.txt",
+ )
+
+
+@pytest.fixture
+def requirements_file(request):
+ return request.config.getoption("--requirements")
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
new file mode 100644
index 00000000000..b941624b2f3
--- /dev/null
+++ b/config/pytest-requirements.txt
@@ -0,0 +1,21 @@
+#
+# This file contains the Python packages which are required in order for us to
+# enable pytest.
+#
+# The syntax is a *subset* of pip's requirements.txt syntax, so that both pip
+# and check_pytest.py can use it. Only whole-line comments and standard Python
+# dependency specifiers are allowed. pip-specific goodies like includes and
+# environment substitutions are not supported; keep it simple.
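+#
+# For illustration, an entry uses a standard dependency specifier, optionally
+# with an environment marker (package name hypothetical):
+#
+#     example-package >= 1.0; python_version < "3.9"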
+#
+# Packages belong here if their absence should cause a configuration failure. If
+# you'd like to make a package optional, consider using pytest.importorskip()
+# instead.
+#
+
+# pytest 7.0 was the last version which supported Python 3.6, but the BSDs have
+# started putting 8.x into ports, so we support both. (pytest 8 can be used
+# throughout once we drop support for Python 3.7.)
+pytest >= 7.0, < 9
+
+# packaging is used by check_pytest.py at configure time.
+packaging
diff --git a/configure b/configure
index 3a0ed11fa8e..d543d331dd3 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,7 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -771,6 +772,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -849,6 +851,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1549,7 +1552,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3631,7 +3637,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3659,6 +3665,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19064,6 +19096,78 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Python packages required for pytest" >&5
+$as_echo_n "checking for Python packages required for pytest... " >&6; }
+ modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`
+ if test $? -eq 0; then
+ echo "$modulestderr" >&5
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $modulestderr" >&5
+$as_echo "$modulestderr" >&6; }
+ as_fn_error $? "Additional Python packages are required to run the pytest suites" "$LINENO" 5
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index c2413720a18..905a1e0c3c5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2408,6 +2413,22 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ AC_MSG_ERROR([pytest not found])
+ fi
+ AC_MSG_CHECKING(for Python packages required for pytest)
+ [modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`]
+ if test $? -eq 0; then
+ echo "$modulestderr" >&AS_MESSAGE_LOG_FD
+ AC_MSG_RESULT(yes)
+ else
+ AC_MSG_RESULT([$modulestderr])
+ AC_MSG_ERROR([Additional Python packages are required to run the pytest suites])
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 1ce2e79c436..38a08a29c8f 100644
--- a/meson.build
+++ b/meson.build
@@ -1702,6 +1702,39 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+ if pytest.found()
+ pytest_check = run_command(pytest,
+ '-c', 'pytest.ini',
+ '--confcutdir=config',
+ '--capture=no',
+ 'config/check_pytest.py',
+ '--requirements', 'config/pytest-requirements.txt',
+ check: false)
+ if pytest_check.returncode() != 0
+ message(pytest_check.stderr())
+ if pytestopt.enabled()
+ error('Additional Python packages are required to run the pytest suites.')
+ else
+ warning('Additional Python packages are required to run the pytest suites.')
+ endif
+ else
+ pytest_enabled = true
+ endif
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3786,6 +3819,63 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = [
+ pytest.full_path(),
+ '-c', meson.project_source_root() / 'pytest.ini',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest' / 'plugins')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3960,6 +4050,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -4000,6 +4091,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 00000000000..8e8388f3afc
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1,6 @@
+[pytest]
+minversion = 7.0
+
+# Ignore ./config (which contains the configure-time check_pytest.py tests) by
+# default.
+addopts = --ignore ./config
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0aa389bc710..8a6885206ce 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -353,6 +354,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -507,6 +509,27 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest/plugins:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pytest.ini' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 0def244c901..f68acd57bc4 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
new file mode 100644
index 00000000000..ef8291e291c
--- /dev/null
+++ b/src/test/pytest/plugins/pgtap.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+from typing import Optional
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+    skip_reason: Optional[str] = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/test/pytest/pyt/test_something.py b/src/test/pytest/pyt/test_something.py
new file mode 100644
index 00000000000..5bd45618512
--- /dev/null
+++ b/src/test/pytest/pyt/test_something.py
@@ -0,0 +1,17 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import pytest
+
+
+@pytest.fixture
+def hey():
+ yield
+ raise "uh-oh"
+
+
+def test_something(hey):
+ assert 2 == 4
+
+
+def test_something_else():
+ assert 2 == 2
--
2.51.1
[Attachment: v3-0003-WIP-pytest-Add-some-SSL-client-tests.patch (text/x-patch)]
From df56210b8585dcdd9d9d755ea2ef37911984d84a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 19 Aug 2025 12:56:45 -0700
Subject: [PATCH v3 03/10] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The `pg` test package contains some helpers and fixtures (as well as
some self-tests for more complicated behavior). Of note (a short usage
sketch follows the list):
- pg.require_test_extra() lets you mark a test/class/module as skippable
if PG_TEST_EXTRA does not contain the necessary strings.
- pg.remaining_timeout() is a function which can be repeatedly called to
determine how much of the PG_TEST_TIMEOUT_DEFAULT remains for the
current test item.
- pg.libpq is a fixture that wraps libpq.so in a more friendly, but
still low-level, ctypes FFI. Allocated resources are unwound and
released during test teardown.
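For example, a module hidden behind PG_TEST_EXTRA=ssl that expects a
connection failure looks roughly like this (all names come from this
patch; the error text is only illustrative):

    import pg
    import pytest

    # Skip every test in this module unless "ssl" is in PG_TEST_EXTRA.
    pytestmark = pg.require_test_extra("ssl")

    def test_rejects_unknown_option(libpq):
        # must_connect() raises libpq.Error on failure, and the fixture
        # releases any PGconn handles during teardown.
        with pytest.raises(libpq.Error, match="invalid connection option"):
            libpq.must_connect(some_unknown_keyword="whatever")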
The mock design is threaded: the server socket listens on a background
thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
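Condensed from test_connection_is_finished_on_error below, a test hands
its server logic to the fixture like this (imports of socket, struct,
and pytest elided; the callback's assertions run on the background
thread and are re-raised at fixture teardown):

    def test_reports_server_error(libpq, local_server):
        def serve_error(s: socket.socket) -> None:
            pktlen = struct.unpack("!I", s.recv(4))[0]  # startup packet length
            s.recv(pktlen - 4)                          # discard the payload
            s.send(b"Esomething is wrong\0")            # v2-style error
            assert not s.recv(1), "client sent unexpected data"

        local_server.background(serve_error)
        with pytest.raises(libpq.Error, match="something is wrong"):
            libpq.must_connect(host=local_server.host, port=local_server.port)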
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 +-
config/pytest-requirements.txt | 10 ++
pytest.ini | 3 +
src/test/pytest/meson.build | 1 +
src/test/pytest/pg/__init__.py | 3 +
src/test/pytest/pg/_env.py | 55 ++++++
src/test/pytest/pg/fixtures.py | 212 +++++++++++++++++++++++
src/test/pytest/pyt/conftest.py | 3 +
src/test/pytest/pyt/test_libpq.py | 171 ++++++++++++++++++
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 129 ++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++
13 files changed, 885 insertions(+), 6 deletions(-)
create mode 100644 src/test/pytest/pg/__init__.py
create mode 100644 src/test/pytest/pg/_env.py
create mode 100644 src/test/pytest/pg/fixtures.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index b3388351110..762d1ce4108 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -228,6 +228,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -322,6 +323,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -346,8 +348,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -508,8 +511,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -658,6 +662,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -678,6 +683,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -816,7 +822,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -879,7 +885,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
index b941624b2f3..0bd6cadf608 100644
--- a/config/pytest-requirements.txt
+++ b/config/pytest-requirements.txt
@@ -19,3 +19,13 @@ pytest >= 7.0, < 9
# packaging is used by check_pytest.py at configure time.
packaging
+
+# Notes on the cryptography package:
+# - 3.3.2 is shipped on Debian bullseye.
+# - 3.4.x drops support for Python 2, making it a version of note for older LTS
+# distros.
+# - 35.x switched versioning schemes and moved to Rust parsing.
+# - 40.x is the last version supporting Python 3.6.
+# XXX Is it appropriate to require cryptography, or should we simply skip
+# dependent tests?
+cryptography >= 3.3.2
diff --git a/pytest.ini b/pytest.ini
index 8e8388f3afc..e7aa84f3a84 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -4,3 +4,6 @@ minversion = 7.0
# Ignore ./config (which contains the configure-time check_pytest.py tests) by
# default.
addopts = --ignore ./config
+
+# Common test code can be found here.
+pythonpath = src/test/pytest
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..f53193e8686 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -11,6 +11,7 @@ tests += {
'pytest': {
'tests': [
'pyt/test_something.py',
+ 'pyt/test_libpq.py',
],
},
}
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
new file mode 100644
index 00000000000..ef8faf54ca4
--- /dev/null
+++ b/src/test/pytest/pg/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import has_test_extra, require_test_extra
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pg/_env.py
new file mode 100644
index 00000000000..6f18af07844
--- /dev/null
+++ b/src/test/pytest/pg/_env.py
@@ -0,0 +1,55 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+from typing import List, Optional
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extra(*keys: str) -> pytest.MarkDecorator:
+    """
+    A convenience annotation which will skip tests unless all of the required
+    keys are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pg.require_test_extra("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pg.require_test_extra("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([has_test_extra(k) for k in keys]),
+ reason="requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys)),
+ )
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
new file mode 100644
index 00000000000..b5d3bff69a8
--- /dev/null
+++ b/src/test/pytest/pg/fixtures.py
@@ -0,0 +1,212 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import platform
+import time
+from typing import Any, Callable, Dict
+
+import pytest
+
+from ._env import test_timeout_default
+
+
+@pytest.fixture
+def remaining_timeout() -> Callable[[], float]:
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle():
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(name, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ return lib
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self):
+ return self._lib.PQresultStatus(self._res)
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str) -> PGresult:
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+
+@pytest.fixture
+def libpq(libpq_handle, remaining_timeout):
+ """
+ Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
+ track of allocated resources and cleans them up during teardown. See
+ _Libpq's public API for details.
+ """
+
+ class _Libpq(contextlib.ExitStack):
+ CONNECTION_OK = 0
+
+ PGRES_EMPTY_QUERY = 0
+
+ class Error(RuntimeError):
+ """
+ libpq.Error is the exception class for application-level errors that
+ are encountered during libpq operations.
+ """
+
+ pass
+
+ def __init__(self):
+ super().__init__()
+ self.lib = libpq_handle
+
+ def _connstr(self, opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+ def must_connect(self, **opts) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a libpq.PGconn object wrapping the connection handle. A
+ failure will raise libpq.Error.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
+
+ # Ensure the connection handle is always closed at the end of the
+ # test.
+ conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
+
+ if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
+ raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
+
+ return conn
+
+ with _Libpq() as lib:
+ yield lib
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..ecb72be26d7
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1,3 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from pg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..9f0857cc612
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,171 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(libpq, opts, expected):
+ """Tests the escape behavior for libpq._connstr()."""
+ assert libpq._connstr(opts) == expected
+
+
+def test_must_connect_errors(libpq):
+ """Tests that must_connect() raises libpq.Error."""
+ with pytest.raises(libpq.Error, match="invalid connection option"):
+ libpq.must_connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(libpq.Error, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ with libpq:
+ libpq.must_connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..fb4db372f03
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,129 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+import pg
+from pg.fixtures import *
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certficate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..28110ae0717
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+    perform an SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(libpq.Error, match="server does not support SSL"):
+ with libpq: # XXX tests shouldn't need to do this
+ libpq.must_connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(libpq, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = libpq.must_connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
--
2.51.1
[Attachment: v3-0004-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)]
From a79257394b3766e676c02543d7f416217f05d293 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 22 Aug 2025 17:39:40 -0700
Subject: [PATCH v3 04/10] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
New session-level fixtures have been added which probably need to
migrate to the `pg` package. Of note:
- datadir points to the server's data directory
- sockdir points to the server's UNIX socket/lock directory
- server_instance actually inits and starts a server via the pg_ctl on
PATH (and could eventually point at an installcheck target)
Wrapping these session-level fixtures is pg_server[_session], which
provides APIs for configuration changes that unwind themselves at the
end of fixture scopes. There's also an example of nested scopes, via
pg_server_session.subcontext(). Many TODOs remain before we're on par
with Test::Cluster, but this should illustrate my desired architecture
pretty well.
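To sketch the intent (hypothetical code -- of the names below, only
subcontext() appears in this patch; the .configure() helper is made up
for illustration):

    def test_with_ssl_enabled(pg_server_session):
        # Configuration applied inside the nested scope unwinds when the
        # block exits, leaving the session-scoped server untouched.
        with pg_server_session.subcontext() as server:
            server.configure(ssl="on")  # hypothetical setter
            ...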
Windows currently uses SCRAM-over-UNIX for the admin account rather than
SSPI-over-TCP. There's some dead Win32 code in pg.current_windows_user,
but I've kept it as an illustration of how a developer might write such
code for SSPI. I'll probably remove it in a future patch version.
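As an aside, the connenv fixture in test_server.py makes it easy to
drive the standard utilities against this instance; a minimal sketch,
assuming the HBA allows the connection:

    import os
    import subprocess

    def test_run_psql(connenv):
        # Merge the server's connection settings into the environment.
        env = dict(os.environ, **connenv)
        subprocess.check_call(["psql", "-c", "SELECT 1"], env=env)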
TODOs:
- port more server configuration behavior from PostgreSQL::Test::Cluster
- decide again on "session" vs. "module" scope for server fixtures
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/pytest/pg/__init__.py | 1 +
src/test/pytest/pg/_win32.py | 145 +++++++++
src/test/ssl/pyt/conftest.py | 113 +++++++
src/test/ssl/pyt/test_server.py | 538 ++++++++++++++++++++++++++++++++
4 files changed, 797 insertions(+)
create mode 100644 src/test/pytest/pg/_win32.py
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pg/__init__.py
index ef8faf54ca4..5dae49b6406 100644
--- a/src/test/pytest/pg/__init__.py
+++ b/src/test/pytest/pg/__init__.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
from ._env import has_test_extra, require_test_extra
+from ._win32 import current_windows_user
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pg/_win32.py
new file mode 100644
index 00000000000..3fd67b10191
--- /dev/null
+++ b/src/test/pytest/pg/_win32.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import ctypes
+import platform
+
+
+def current_windows_user():
+ """
+ A port of pg_regress.c's current_windows_user() helper. Returns
+ (accountname, domainname).
+
+    XXX This is dead code now, but I'm keeping it as a motivating example of
+    Win32 interaction; someone may find it useful in the future when writing
+    SSPI tests.
+ """
+ try:
+ advapi32 = ctypes.windll.advapi32
+ kernel32 = ctypes.windll.kernel32
+ except AttributeError:
+ raise RuntimeError(
+ f"current_windows_user() is not supported on {platform.system()}"
+ )
+
+ def raise_winerror_when_false(result, func, arguments):
+ """
+ A ctypes errcheck handler that raises WinError (which will contain the
+ result of GetLastError()) when the function's return value is false.
+ """
+ if not result:
+ raise ctypes.WinError()
+
+ #
+ # Function Prototypes
+ #
+
+ from ctypes import wintypes
+
+ # GetCurrentProcess
+ kernel32.GetCurrentProcess.restype = wintypes.HANDLE
+ kernel32.GetCurrentProcess.argtypes = []
+
+ # OpenProcessToken
+ TOKEN_READ = 0x00020008
+
+ advapi32.OpenProcessToken.restype = wintypes.BOOL
+ advapi32.OpenProcessToken.argtypes = [
+ wintypes.HANDLE,
+ wintypes.DWORD,
+ wintypes.PHANDLE,
+ ]
+ advapi32.OpenProcessToken.errcheck = raise_winerror_when_false
+
+ # GetTokenInformation
+ PSID = wintypes.LPVOID # we don't need the internals
+ TOKEN_INFORMATION_CLASS = wintypes.INT
+ TokenUser = 1
+
+ class SID_AND_ATTRIBUTES(ctypes.Structure):
+ _fields_ = [
+ ("Sid", PSID),
+ ("Attributes", wintypes.DWORD),
+ ]
+
+ class TOKEN_USER(ctypes.Structure):
+ _fields_ = [
+ ("User", SID_AND_ATTRIBUTES),
+ ]
+
+ advapi32.GetTokenInformation.restype = wintypes.BOOL
+ advapi32.GetTokenInformation.argtypes = [
+ wintypes.HANDLE,
+ TOKEN_INFORMATION_CLASS,
+ wintypes.LPVOID,
+ wintypes.DWORD,
+ wintypes.PDWORD,
+ ]
+ advapi32.GetTokenInformation.errcheck = raise_winerror_when_false
+
+ # LookupAccountSid
+ SID_NAME_USE = wintypes.INT
+ PSID_NAME_USE = ctypes.POINTER(SID_NAME_USE)
+
+ advapi32.LookupAccountSidW.restype = wintypes.BOOL
+ advapi32.LookupAccountSidW.argtypes = [
+ wintypes.LPCWSTR,
+ PSID,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ PSID_NAME_USE,
+ ]
+ advapi32.LookupAccountSidW.errcheck = raise_winerror_when_false
+
+ #
+ # Implementation (see pg_SSPI_recv_auth())
+ #
+
+ # Get the current process token...
+ token = wintypes.HANDLE()
+ proc = kernel32.GetCurrentProcess()
+ advapi32.OpenProcessToken(proc, TOKEN_READ, token)
+
+ # ...then read the TOKEN_USER struct for that token...
+ info = TOKEN_USER()
+ infolen = wintypes.DWORD()
+
+ try:
+ # (GetTokenInformation creates a buffer bigger than TOKEN_USER, so we
+ # have to query the correct length first.)
+ advapi32.GetTokenInformation(token, TokenUser, None, 0, ctypes.byref(infolen))
+ assert False, "GetTokenInformation succeeded unexpectedly"
+
+ except OSError as err:
+ assert err.winerror == 122 # insufficient buffer
+
+ ctypes.resize(info, infolen.value)
+ advapi32.GetTokenInformation(
+ token,
+ TokenUser,
+ ctypes.byref(info),
+ ctypes.sizeof(info),
+ ctypes.byref(infolen),
+ )
+
+ # ...then pull the account and domain names out of the user SID.
+ MAXPGPATH = 1024
+
+ account = ctypes.create_unicode_buffer(MAXPGPATH)
+ domain = ctypes.create_unicode_buffer(MAXPGPATH)
+ accountlen = wintypes.DWORD(ctypes.sizeof(account))
+ domainlen = wintypes.DWORD(ctypes.sizeof(domain))
+ use = SID_NAME_USE()
+
+ advapi32.LookupAccountSidW(
+ None,
+ info.User.Sid,
+ account,
+ ctypes.byref(accountlen),
+ domain,
+ ctypes.byref(domainlen),
+ ctypes.byref(use),
+ )
+
+ return (account.value, domain.value)
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index fb4db372f03..85d2c994828 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,6 +1,12 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
+import os
+import pathlib
+import platform
+import secrets
+import socket
+import subprocess
import tempfile
from collections import namedtuple
@@ -127,3 +133,110 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def server_instance(certs, datadir, sockdir, winpassword):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ TODO: when installcheck is supported, this should optionally point to the
+ currently running server instead.
+ """
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ subprocess.check_call(
+ ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
+ )
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = os.path.join(datadir, "postgresql.log")
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
+ yield (hostaddr, port)
+ subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..2d0be735371
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,538 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import ssl
+import struct
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Dict, List, Union
+
+import pytest
+
+import pg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pg.require_test_extra("ssl")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture(scope="session")
+def connenv(server_instance, sockdir, datadir):
+ """
+ Provides the values for several PG* environment variables needed for our
+ utility programs to connect to the server_instance.
+ """
+ return {
+ "PGHOST": str(sockdir),
+ "PGPORT": str(server_instance[1]),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(datadir),
+ }
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ #
+ # TODO: this is less helpful if there are multiple layers, because it's
+ # not clear which backup to look at. Can the backup name be printed as
+ # part of the failed test output? Should we only swap on test failure?
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines: Union[str, List[str]]):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
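+
+        For example, either of these prepends an equivalent line:
+
+            hba.prepend("local all all trust")
+            hba.prepend(["local", "all", "all", "trust"])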
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for l in lines:
+ if isinstance(l, list):
+ print(*l, file=f)
+ else:
+ print(l, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it. See also pg_server, which provides an instance of this class and
+ context managers for enforcing the reload/restart order of operations.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
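+
+        For example:
+
+            conf.set(ssl="on", log_min_messages="debug1")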
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+@pytest.fixture(scope="session")
+def pg_server_session(server_instance, connenv, datadir, winpassword):
+ """
+ Provides common routines for configuring and connecting to the
+ server_instance. For example:
+
+ users = pg_server_session.create_users("one", "two")
+ dbs = pg_server_session.create_dbs("default")
+
+ with pg_server_session.reloading() as s:
+ s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
+
+ conn = connect_somehow(**pg_server_session.conninfo)
+ ...
+
+ Attributes of note are
+ - .conninfo: provides TCP connection info for the server
+
+ This fixture unwinds its configuration changes at the end of the pytest
+ session. For more granular changes, pg_server_session.subcontext() splits
+ off a "nested" context to allow smaller scopes.
+ """
+
+ class _Server(contextlib.ExitStack):
+ conninfo = dict(
+ hostaddr=server_instance[0],
+ port=server_instance[1],
+ )
+
+ # for _backup_configuration()
+ _Backup = namedtuple("Backup", "conf, hba")
+
+ def subcontext(self):
+ """
+ Creates a new server stack instance that can be tied to a smaller
+ scope than "session".
+ """
+ # So far, there doesn't seem to be a need to link the two objects,
+ # since HBA/Config/FileBackup operate directly on the filesystem and
+ # will appear to "nest" naturally.
+ return self.__class__()
+
+ def create_users(self, *userkeys: str) -> Dict[str, str]:
+ """
+ Creates new users which will be dropped at the end of the server
+ context.
+
+ For each provided key, a related user name will be selected and
+ stored in a map. This map is returned to let calling code look up
+ the selected usernames (instead of hardcoding them and potentially
+ stomping on an existing installation).
+ """
+ usermap = {}
+
+ for u in userkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = u + "user"
+ usermap[u] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE USER " + name)
+ self.callback(self.psql, "-c", "DROP USER " + name)
+
+ return usermap
+
+ def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
+ """
+ Creates new databases which will be dropped at the end of the server
+ context. See create_users() for the meaning of the keys and returned
+ map.
+ """
+ dbmap = {}
+
+ for d in dbkeys:
+ # TODO: use a uniquifier to support installcheck
+ name = d + "db"
+ dbmap[d] = name
+
+ # TODO: proper escaping
+ self.psql("-c", "CREATE DATABASE " + name)
+ self.callback(self.psql, "-c", "DROP DATABASE " + name)
+
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ try:
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+ except:
+ # We only want to reload at the end of the suite if there were
+ # no errors. During exceptions, the pushed callback handles
+ # things instead, so there's nothing to do here.
+ raise
+ else:
+ # Suite completed successfully.
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ try:
+ self.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ except:
+ raise
+ else:
+ self.pg_ctl("restart")
+
+ def psql(self, *args):
+ """
+ Runs psql with the given arguments. Password prompts are always
+ disabled. On Windows, the admin password will be included in the
+ environment.
+ """
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=winpassword)
+ else:
+ pw = None
+
+ self._run("psql", "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """
+ Runs pg_ctl with the given arguments. Log output will be placed in
+ postgresql.log in the server's data directory.
+
+ TODO: put the log in TESTLOGDIR
+ """
+ self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
+
+ def _run(self, cmd, *args, addenv: dict = None):
+ # Override the existing environment with the connenv values and
+ # anything the caller wanted to add. (Python 3.9 gives us the
+ # less-ugly `os.environ | connenv` merge operator.)
+ subenv = dict(os.environ, **connenv)
+ if addenv:
+ subenv.update(addenv)
+
+ subprocess.check_call([cmd, *args], env=subenv)
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return self._Backup(
+ hba=self.enter_context(HBA(datadir)),
+ conf=self.enter_context(Config(datadir)),
+ )
+
+ with _Server() as s:
+ yield s
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_session, certs, datadir):
+ """
+    Sets up the server settings required by all tests in this module. The
+    fixture value is a tuple (users, dbs) containing the user and database
+    names that have been chosen for the test session.
+ """
+ try:
+ with pg_server_session.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_session.create_users(
+ "ssl",
+ )
+
+ dbs = pg_server_session.create_dbs(
+ "ssl",
+ )
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
+
+
+@pytest.fixture
+def pg_server(pg_server_session):
+ """
+ A per-test instance of pg_server_session. Use this fixture to make changes
+ to the server which will be rolled back at the end of every test.
+ """
+ with pg_server_session.subcontext() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+@pytest.mark.parametrize(
+ # fmt: off
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+ # fmt: on
+)
+def test_direct_ssl_certificate_authentication(
+ pg_server,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg_server.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
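+            # Startup packet layout: an int32 length (which includes itself),
+            # the protocol version (3.0) as two int16s, then the options.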
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
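+            # The ErrorResponse payload is a sequence of NUL-terminated fields,
+            # each prefixed with a one-byte code (S = severity, M = message).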
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.51.1
[Attachment: v3-0005-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch (text/x-patch)]
From 50ddd26a1416d5d77e1351c47b5dd349004f05b5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v3 05/10] ci: Add MTEST_SUITES for optional test tailoring
This should make it easier to control the test cycle time for Cirrus. Add
the desired suites (remembering `--suite setup`!) to the top-level envvar.
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 762d1ce4108..cb8b6aef930 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -252,7 +253,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -400,7 +401,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -619,7 +620,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -632,7 +633,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -758,7 +759,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -841,7 +842,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -902,7 +903,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.51.1
[Attachment: v3-0006-XXX-run-pytest-and-ssl-suite-all-OSes.patch (text/x-patch)]
From 5570682497e995e424717bd2e9375b5d3be553f2 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:38:52 -0700
Subject: [PATCH v3 06/10] XXX run pytest and ssl suite, all OSes
---
.cirrus.star | 2 +-
.cirrus.tasks.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.cirrus.star b/.cirrus.star
index e9bb672b959..7c1caaa12f1 100644
--- a/.cirrus.star
+++ b/.cirrus.star
@@ -73,7 +73,7 @@ def compute_environment_vars():
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
- default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
+ default_manual_trigger_tasks = []
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index cb8b6aef930..e2d380afc60 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,7 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
- MTEST_SUITES: # --suite setup --suite ssl --suite ...
+ MTEST_SUITES: --suite setup --suite pytest --suite ssl
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
--
2.51.1
[Attachment: v3-0007-Refactor-and-improve-pytest-infrastructure.patch (text/x-patch)]
From f5485a815769a23ecda96aeafa956e0ff2273e07 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Sun, 19 Oct 2025 23:01:30 +0200
Subject: [PATCH v3 07/10] Refactor and improve pytest infrastructure
This change refactors the pytest-based test infrastructure and adds
several new features.
The primary features it adds are:
- A `sql` method on `PGconn`: it takes a query and returns the results
as native Python types.
- A `conn` fixture: a libpq-based connection to the default Postgres
server.
- Use the `pg_config` binary to find the libdir and bindir (can be
overridden by setting PG_CONFIG). Previously I had to use
LD_LIBRARY_PATH when running pytest manually.
The refactoring it does:
- Rename `pg_server` fixture to `pg` since it'll likely be one of the
most commonly used ones.
- Rename `pg` module to `pypg` to avoid naming conflict/shadowing
problems with the newly renamed `pg` fixture
- Move class definitions outside of fixtures to separate modules (either
in the `pypg` module or the new `libpq` module)
- Move all "general" fixtures to the `pypg.fixtures` module, instead of
having them be defined in the ssl module.
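
With these pieces in place, a test can (if I haven't misread the new
fixtures) be as small as:

    def test_arithmetic(conn):
        assert conn.sql("SELECT 1 + 1") == 2

    def test_series(conn):
        assert conn.sql("SELECT * FROM generate_series(1, 3)") == [1, 2, 3]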
---
src/test/pytest/libpq.py | 409 ++++++++++++++++++++++
src/test/pytest/pg/fixtures.py | 212 -----------
src/test/pytest/plugins/pgtap.py | 1 -
src/test/pytest/{pg => pypg}/__init__.py | 0
src/test/pytest/{pg => pypg}/_env.py | 1 -
src/test/pytest/{pg => pypg}/_win32.py | 0
src/test/pytest/pypg/fixtures.py | 175 +++++++++
src/test/pytest/pypg/server.py | 387 ++++++++++++++++++++
src/test/pytest/pypg/util.py | 42 +++
src/test/pytest/pyt/conftest.py | 3 +-
src/test/pytest/pyt/test_libpq.py | 23 +-
src/test/pytest/pyt/test_query_helpers.py | 286 +++++++++++++++
src/test/ssl/pyt/conftest.py | 136 ++-----
src/test/ssl/pyt/test_client.py | 26 +-
src/test/ssl/pyt/test_server.py | 380 +-------------------
15 files changed, 1370 insertions(+), 711 deletions(-)
create mode 100644 src/test/pytest/libpq.py
delete mode 100644 src/test/pytest/pg/fixtures.py
rename src/test/pytest/{pg => pypg}/__init__.py (100%)
rename src/test/pytest/{pg => pypg}/_env.py (97%)
rename src/test/pytest/{pg => pypg}/_win32.py (100%)
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
diff --git a/src/test/pytest/libpq.py b/src/test/pytest/libpq.py
new file mode 100644
index 00000000000..b851a117b66
--- /dev/null
+++ b/src/test/pytest/libpq.py
@@ -0,0 +1,409 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict
+
+
+class LibpqError(RuntimeError):
+ """
+ Exception class for application-level errors that are encountered during libpq operations.
+ """
+
+ pass
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+        assert False, f"load_libpq_handle() must be updated for {system}"
+
+ libpq_path = os.path.join(libdir, name)
+
+ # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
+ # preferred way around that is to know the absolute path to libpq.dll, but
+ # that doesn't seem to mesh well with the current test infrastructure. For
+ # now, enable "standard" LoadLibrary behavior.
+ loadopts = {}
+ if system == "Windows":
+ loadopts["winmode"] = 0
+
+ lib = ctypes.CDLL(libpq_path, **loadopts)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+# Helper converters
+def _parse_array(value: str, elem_oid: int) -> list:
+ """Parse PostgreSQL array syntax: {elem1,elem2,elem3}"""
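+    # Note: this parser is intentionally simplistic; quoted elements that
+    # contain commas, escape sequences, and nested arrays are not handled.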
+ if not (value.startswith("{") and value.endswith("}")):
+ return value
+
+ inner = value[1:-1]
+ if not inner:
+ return []
+
+ elements = inner.split(",")
+ result = []
+ for elem in elements:
+ elem = elem.strip()
+ if elem == "NULL":
+ result.append(None)
+ else:
+ # Remove quotes if present
+ if elem.startswith('"') and elem.endswith('"'):
+ elem = elem[1:-1]
+ result.append(_convert_pg_value(elem, elem_oid))
+
+ return result
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
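+
+    For example:
+
+        [(1,)]               -> 1
+        [(1, 2)]             -> (1, 2)
+        [(1,), (2,)]         -> [1, 2]
+        [(1, 'a'), (2, 'b')] -> [(1, 'a'), (2, 'b')]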
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ error_msg = res.error_message()
+ raise LibpqError(f"Query failed: {error_msg}\nQuery: {query}")
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ error_msg = res.error_message() or f"Unexpected status: {status}"
+ raise LibpqError(f"Query failed: {error_msg}\nQuery: {query}")
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
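+
+    For example:
+
+        connstr({"host": "/tmp", "port": 5432})  ->  "host=/tmp port=5432"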
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
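+    # libpq treats a connect_timeout of 0 as "wait indefinitely", so clamp the
+    # remaining time to at least one second.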
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/pg/fixtures.py b/src/test/pytest/pg/fixtures.py
deleted file mode 100644
index b5d3bff69a8..00000000000
--- a/src/test/pytest/pg/fixtures.py
+++ /dev/null
@@ -1,212 +0,0 @@
-# Copyright (c) 2025, PostgreSQL Global Development Group
-
-import contextlib
-import ctypes
-import platform
-import time
-from typing import Any, Callable, Dict
-
-import pytest
-
-from ._env import test_timeout_default
-
-
-@pytest.fixture
-def remaining_timeout():
- """
- This fixture provides a function that returns how much of the
- PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
- This value is never less than zero.
-
- This fixture is per-test, so the deadline is also reset on a per-test basis.
- """
- now = time.monotonic()
- deadline = now + test_timeout_default()
-
- return lambda: max(deadline - time.monotonic(), 0)
-
-
-class _PGconn(ctypes.Structure):
- pass
-
-
-class _PGresult(ctypes.Structure):
- pass
-
-
-_PGconn_p = ctypes.POINTER(_PGconn)
-_PGresult_p = ctypes.POINTER(_PGresult)
-
-
-@pytest.fixture(scope="session")
-def libpq_handle():
- """
- Loads a ctypes handle for libpq. Some common function prototypes are
- initialized for general use.
- """
- system = platform.system()
-
- if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
- name = "libpq.so.5"
- elif system == "Darwin":
- name = "libpq.5.dylib"
- elif system == "Windows":
- name = "libpq.dll"
- else:
- assert False, f"the libpq fixture must be updated for {system}"
-
- # XXX ctypes.CDLL() is a little stricter with load paths on Windows. The
- # preferred way around that is to know the absolute path to libpq.dll, but
- # that doesn't seem to mesh well with the current test infrastructure. For
- # now, enable "standard" LoadLibrary behavior.
- loadopts = {}
- if system == "Windows":
- loadopts["winmode"] = 0
-
- lib = ctypes.CDLL(name, **loadopts)
-
- #
- # Function Prototypes
- #
-
- lib.PQconnectdb.restype = _PGconn_p
- lib.PQconnectdb.argtypes = [ctypes.c_char_p]
-
- lib.PQstatus.restype = ctypes.c_int
- lib.PQstatus.argtypes = [_PGconn_p]
-
- lib.PQexec.restype = _PGresult_p
- lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
-
- lib.PQresultStatus.restype = ctypes.c_int
- lib.PQresultStatus.argtypes = [_PGresult_p]
-
- lib.PQclear.restype = None
- lib.PQclear.argtypes = [_PGresult_p]
-
- lib.PQerrorMessage.restype = ctypes.c_char_p
- lib.PQerrorMessage.argtypes = [_PGconn_p]
-
- lib.PQfinish.restype = None
- lib.PQfinish.argtypes = [_PGconn_p]
-
- return lib
-
-
-class PGresult(contextlib.AbstractContextManager):
- """Wraps a raw _PGresult_p with a more friendly interface."""
-
- def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
- self._lib = lib
- self._res = res
-
- def __exit__(self, *exc):
- self._lib.PQclear(self._res)
- self._res = None
-
- def status(self):
- return self._lib.PQresultStatus(self._res)
-
-
-class PGconn(contextlib.AbstractContextManager):
- """
- Wraps a raw _PGconn_p with a more friendly interface. This is just a
- stub; it's expected to grow.
- """
-
- def __init__(
- self,
- lib: ctypes.CDLL,
- handle: _PGconn_p,
- stack: contextlib.ExitStack,
- ):
- self._lib = lib
- self._handle = handle
- self._stack = stack
-
- def __exit__(self, *exc):
- self._lib.PQfinish(self._handle)
- self._handle = None
-
- def exec(self, query: str) -> PGresult:
- """
- Executes a query via PQexec() and returns a PGresult.
- """
- res = self._lib.PQexec(self._handle, query.encode())
- return self._stack.enter_context(PGresult(self._lib, res))
-
-
-@pytest.fixture
-def libpq(libpq_handle, remaining_timeout):
- """
- Provides a ctypes-based API wrapped around libpq.so. This fixture keeps
- track of allocated resources and cleans them up during teardown. See
- _Libpq's public API for details.
- """
-
- class _Libpq(contextlib.ExitStack):
- CONNECTION_OK = 0
-
- PGRES_EMPTY_QUERY = 0
-
- class Error(RuntimeError):
- """
- libpq.Error is the exception class for application-level errors that
- are encountered during libpq operations.
- """
-
- pass
-
- def __init__(self):
- super().__init__()
- self.lib = libpq_handle
-
- def _connstr(self, opts: Dict[str, Any]) -> str:
- """
- Flattens the provided options into a libpq connection string. Values
- are converted to str and quoted/escaped as necessary.
- """
- settings = []
-
- for k, v in opts.items():
- v = str(v)
- if not v:
- v = "''"
- else:
- v = v.replace("\\", "\\\\")
- v = v.replace("'", "\\'")
-
- if " " in v:
- v = f"'{v}'"
-
- settings.append(f"{k}={v}")
-
- return " ".join(settings)
-
- def must_connect(self, **opts) -> PGconn:
- """
- Connects to a server, using the given connection options, and
- returns a libpq.PGconn object wrapping the connection handle. A
- failure will raise libpq.Error.
-
- Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
- explicitly overridden in opts.
- """
-
- if "connect_timeout" not in opts:
- t = int(remaining_timeout())
- opts["connect_timeout"] = max(t, 1)
-
- conn_p = self.lib.PQconnectdb(self._connstr(opts).encode())
-
- # Ensure the connection handle is always closed at the end of the
- # test.
- conn = self.enter_context(PGconn(self.lib, conn_p, stack=self))
-
- if self.lib.PQstatus(conn_p) != self.CONNECTION_OK:
- raise self.Error(self.lib.PQerrorMessage(conn_p).decode())
-
- return conn
-
- with _Libpq() as lib:
- yield lib
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
index ef8291e291c..6a729d252e1 100644
--- a/src/test/pytest/plugins/pgtap.py
+++ b/src/test/pytest/plugins/pgtap.py
@@ -2,7 +2,6 @@
import os
import sys
-from typing import Optional
import pytest
diff --git a/src/test/pytest/pg/__init__.py b/src/test/pytest/pypg/__init__.py
similarity index 100%
rename from src/test/pytest/pg/__init__.py
rename to src/test/pytest/pypg/__init__.py
diff --git a/src/test/pytest/pg/_env.py b/src/test/pytest/pypg/_env.py
similarity index 97%
rename from src/test/pytest/pg/_env.py
rename to src/test/pytest/pypg/_env.py
index 6f18af07844..154c986d73e 100644
--- a/src/test/pytest/pg/_env.py
+++ b/src/test/pytest/pypg/_env.py
@@ -2,7 +2,6 @@
import logging
import os
-from typing import List, Optional
import pytest
diff --git a/src/test/pytest/pg/_win32.py b/src/test/pytest/pypg/_win32.py
similarity index 100%
rename from src/test/pytest/pg/_win32.py
rename to src/test/pytest/pypg/_win32.py
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..cf22c8ec436
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,175 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import secrets
+import time
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ return load_libpq_handle(libdir)
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return capture(pg_config, "--bindir")
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return capture(pg_config, "--libdir")
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server data directory. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer(bindir, datadir, sockdir, winpassword, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+    Module-scoped server context. This is useful when certain settings need to
+    be overridden at the module level through autouse fixtures. An example of
+    this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
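+
+    Example:
+
+        def test_something(pg):
+            dbs = pg.create_dbs("scratch")  # dropped at the end of the test
+            conn = pg.connect(dbname=dbs["scratch"])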
+ """
+ pg_server_module.set_timeout(remaining_timeout)
+ with pg_server_module.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..d6675cde93d
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,387 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import glob
+import os
+import pathlib
+import platform
+import shutil
+import socket
+import subprocess
+import tempfile
+import time
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(self, bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Initialize and start a PostgreSQL server instance.
+ """
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._winpassword = winpassword
+ self._pg_ctl = os.path.join(bindir, "pg_ctl")
+ self._log = os.path.join(datadir, "postgresql.log")
+
+ initdb = os.path.join(bindir, "initdb")
+ pg_ctl = self._pg_ctl
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ run(initdb, "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir)
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ run(pg_ctl, "-D", datadir, "init")
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = self._log
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ run(pg_ctl, "-D", datadir, "-l", log, "start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ pid = int(f.readline().strip())
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ self.pid = pid
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=self._winpassword)
+ else:
+ pw = None
+ self._run(os.path.join(self._bindir, "psql"), "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "-l", self._log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.sockdir),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ run(self._pg_ctl, "-D", self.datadir, "-l", self._log, "stop")
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ This is a convenience method that automatically fills in the host, port,
+ and dbname (defaulting to 'postgres') for connecting to this server.
+
+        Args:
+            **opts: Additional connection options (can override the defaults)
+
+        Cleanup is managed by the server's internal cleanup stack, and the
+        connection timeout comes from the function installed via set_timeout().
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ from libpq import connect as libpq_connect
+
+ # Set default connection options for this server
+ defaults = {
+ "host": str(self.sockdir),
+ "port": self.port,
+ "dbname": "postgres",
+ }
+
+ # Merge with user-provided options (user options take precedence)
+ defaults.update(opts)
+
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+    """
+    run runs the given command, echoing the command line to stderr unless
+    silent is set. A single string argument is executed through the shell;
+    otherwise the arguments are stringified and executed directly.
+    """
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6 which we still like to support
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
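+    # When silent, also suppress the child's stdout (stderr is left alone).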
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
+    """capture runs the given command and returns its stdout, without the
+    trailing newline."""
+    out = run(command, *args, stdout=stdout, encoding=encoding, **kwargs).stdout
+    # str.removesuffix() is only available in Python 3.9+; strip manually.
+    return out[:-1] if out.endswith("\n") else out
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
index ecb72be26d7..641af0bbac5 100644
--- a/src/test/pytest/pyt/conftest.py
+++ b/src/test/pytest/pyt/conftest.py
@@ -1,3 +1,4 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
-from pg.fixtures import *
+
+from pypg.fixtures import *
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
index 9f0857cc612..4fcf4056f41 100644
--- a/src/test/pytest/pyt/test_libpq.py
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -9,6 +9,8 @@ from typing import Callable
import pytest
+from libpq import connstr, LibpqError
+
@pytest.mark.parametrize(
"opts, expected",
@@ -22,15 +24,15 @@ import pytest
(dict(keyword=" \\' "), r"keyword=' \\\' '"),
],
)
-def test_connstr(libpq, opts, expected):
- """Tests the escape behavior for libpq._connstr()."""
- assert libpq._connstr(opts) == expected
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
-def test_must_connect_errors(libpq):
- """Tests that must_connect() raises libpq.Error."""
- with pytest.raises(libpq.Error, match="invalid connection option"):
- libpq.must_connect(some_unknown_keyword="whatever")
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
@pytest.fixture
@@ -145,7 +147,7 @@ def local_server(tmp_path, remaining_timeout):
yield s
-def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout):
+def test_connection_is_finished_on_error(connect, local_server):
"""Tests that PQfinish() gets called at the end of testing."""
expected_error = "something is wrong"
@@ -165,7 +167,6 @@ def test_connection_is_finished_on_error(libpq, local_server, remaining_timeout)
local_server.background(serve_error)
- with pytest.raises(libpq.Error, match=expected_error):
+ with pytest.raises(LibpqError, match=expected_error):
# Exiting this context should result in PQfinish().
- with libpq:
- libpq.must_connect(host=local_server.host, port=local_server.port)
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..5a5a1ae1edf
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,286 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result
+simplification: a single cell collapses to a bare Python value, a single row
+to a tuple, a single column to a list, and anything else to a list of tuples.
+"""
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql(
+ "SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL"
+ )
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ import uuid
+
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ import uuid
+
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 85d2c994828..6e8699e0971 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -1,19 +1,14 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
import datetime
-import os
-import pathlib
-import platform
-import secrets
-import socket
+import re
import subprocess
import tempfile
from collections import namedtuple
import pytest
-import pg
-from pg.fixtures import *
+from pypg.fixtures import *
@pytest.fixture(scope="session")
@@ -135,108 +130,51 @@ def certs(cryptography, tmp_path_factory):
return _Certs()
-@pytest.fixture(scope="session")
-def datadir(tmp_path_factory):
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
"""
- Returns the directory name to use as the server data directory. If
- TESTDATADIR is provided, that will be used; otherwise a new temporary
- directory is created in the pytest temp root.
+    Sets up required server settings for all tests in this module. Returns a
+    (users, dbs) tuple with the names chosen for the test users and databases.
"""
- d = os.getenv("TESTDATADIR")
- if d:
- d = pathlib.Path(d)
- else:
- d = tmp_path_factory.mktemp("tmp_check")
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
- return d
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
-@pytest.fixture(scope="session")
-def sockdir(tmp_path_factory):
- """
- Returns the directory name to use as the server's unix_socket_directories
- setting. Local client connections use this as the PGHOST.
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
- At the moment, this is always put under the pytest temp root.
- """
- return tmp_path_factory.mktemp("sockfiles")
+ # Some other error happened.
+ raise
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
-@pytest.fixture(scope="session")
-def winpassword():
- """The per-session SCRAM password for the server admin on Windows."""
- return secrets.token_urlsafe(16)
+ return (users, dbs)
-@pytest.fixture(scope="session")
-def server_instance(certs, datadir, sockdir, winpassword):
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
"""
- Starts a running Postgres server listening on localhost. The HBA initially
- allows only local UNIX connections from the same user.
-
- TODO: when installcheck is supported, this should optionally point to the
- currently running server instead.
+ Creates a Cert for the "ssl" user.
"""
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
- # Lock down the HBA by default; tests can open it back up later.
- if platform.system() == "Windows":
- # On Windows, for admin connections, use SCRAM with a generated password
- # over local sockets. This requires additional work during initdb.
- method = "scram-sha-256"
-
- # NamedTemporaryFile doesn't work very nicely on Windows until Python
- # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
- # Until then, specify delete=False and manually unlink after use.
- with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
- pwfile.write(winpassword)
-
- subprocess.check_call(
- ["initdb", "--auth=scram-sha-256", "--pwfile", pwfile.name, datadir]
- )
- os.unlink(pwfile.name)
-
- else:
- # For other OSes we can just use peer auth.
- method = "peer"
- subprocess.check_call(["pg_ctl", "-D", datadir, "init"])
-
- with open(datadir / "pg_hba.conf", "w") as f:
- print(f"# default: local {method} connections only", file=f)
- print(f"local all all {method}", file=f)
-
- # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
- # addresses in one go.
- #
- # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
- if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
- addr = ("::1", 0)
- s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
-
- hostaddr, port, _, _ = s.getsockname()
- addrs = [hostaddr, "127.0.0.1"]
-
- else:
- addr = ("127.0.0.1", 0)
-
- s = socket.socket()
- s.bind(addr)
-
- hostaddr, port = s.getsockname()
- addrs = [hostaddr]
-
- log = os.path.join(datadir, "postgresql.log")
-
- with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
- print(file=f)
- print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
- print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
- print("port =", port, file=f)
- print("log_connections = all", file=f)
-
- # Between closing of the socket, s, and server start, we're racing against
- # anything that wants to open up ephemeral ports, so try not to put any new
- # work here.
-
- subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "start"])
- yield (hostaddr, port)
- subprocess.check_call(["pg_ctl", "-D", datadir, "-l", log, "stop"])
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
index 28110ae0717..247681f93cb 100644
--- a/src/test/ssl/pyt/test_client.py
+++ b/src/test/ssl/pyt/test_client.py
@@ -10,10 +10,11 @@ from typing import Callable
import pytest
-import pg
+import pypg
+from libpq import LibpqError, ExecStatus
# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
-pytestmark = pg.require_test_extra("ssl")
+pytestmark = pypg.require_test_extra("ssl")
@pytest.fixture(scope="session", autouse=True)
@@ -192,7 +193,7 @@ def ssl_server(tcp_server_class, certs):
@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
-def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
"""
Make sure client refuses to talk to non-SSL servers with stricter
sslmodes.
@@ -214,16 +215,15 @@ def test_server_with_ssl_disabled(libpq, tcp_server, certs, sslmode):
tcp_server.background(refuse_ssl)
- with pytest.raises(libpq.Error, match="server does not support SSL"):
- with libpq: # XXX tests shouldn't need to do this
- libpq.must_connect(
- **tcp_server.conninfo,
- sslrootcert=certs.ca.certpath,
- sslmode=sslmode,
- )
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
-def test_verify_full_connection(libpq, ssl_server, certs):
+def test_verify_full_connection(connect, ssl_server, certs):
"""Completes a verify-full connection and empty query."""
def handle_empty_query(s: ssl.SSLSocket):
@@ -269,10 +269,10 @@ def test_verify_full_connection(libpq, ssl_server, certs):
ssl_server.background_ssl(handle_empty_query)
- conn = libpq.must_connect(
+ conn = connect(
**ssl_server.conninfo,
sslrootcert=certs.ca.certpath,
sslmode="verify-full",
)
with conn:
- assert conn.exec("").status() == libpq.PGRES_EMPTY_QUERY
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
index 2d0be735371..60628d0c067 100644
--- a/src/test/ssl/pyt/test_server.py
+++ b/src/test/ssl/pyt/test_server.py
@@ -1,25 +1,16 @@
# Copyright (c) 2025, PostgreSQL Global Development Group
-import contextlib
-import os
-import pathlib
-import platform
import re
-import shutil
import socket
import ssl
import struct
-import subprocess
-import tempfile
-from collections import namedtuple
-from typing import Dict, List, Union
import pytest
-import pg
+import pypg
# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
-pytestmark = pg.require_test_extra("ssl")
+pytestmark = pypg.require_test_extra("ssl")
#
@@ -27,363 +18,6 @@ pytestmark = pg.require_test_extra("ssl")
#
-@pytest.fixture(scope="session")
-def connenv(server_instance, sockdir, datadir):
- """
- Provides the values for several PG* environment variables needed for our
- utility programs to connect to the server_instance.
- """
- return {
- "PGHOST": str(sockdir),
- "PGPORT": str(server_instance[1]),
- "PGDATABASE": "postgres",
- "PGDATA": str(datadir),
- }
-
-
-class FileBackup(contextlib.AbstractContextManager):
- """
- A context manager which backs up a file's contents, restoring them on exit.
- """
-
- def __init__(self, file: pathlib.Path):
- super().__init__()
-
- self._file = file
-
- def __enter__(self):
- with tempfile.NamedTemporaryFile(
- prefix=self._file.name, dir=self._file.parent, delete=False
- ) as f:
- self._backup = pathlib.Path(f.name)
-
- shutil.copyfile(self._file, self._backup)
-
- return self
-
- def __exit__(self, *exc):
- # Swap the backup and the original file, so that the modified contents
- # can still be inspected in case of failure.
- #
- # TODO: this is less helpful if there are multiple layers, because it's
- # not clear which backup to look at. Can the backup name be printed as
- # part of the failed test output? Should we only swap on test failure?
- tmp = self._backup.parent / (self._backup.name + ".tmp")
-
- shutil.copyfile(self._file, tmp)
- shutil.copyfile(self._backup, self._file)
- shutil.move(tmp, self._backup)
-
-
-class HBA(FileBackup):
- """
- Backs up a server's HBA configuration and provides means for temporarily
- editing it. See also pg_server, which provides an instance of this class and
- context managers for enforcing the reload/restart order of operations.
- """
-
- def __init__(self, datadir: pathlib.Path):
- super().__init__(datadir / "pg_hba.conf")
-
- def prepend(self, *lines: Union[str, List[str]]):
- """
- Temporarily prepends lines to the server's pg_hba.conf.
-
- As sugar for aligning HBA columns in the tests, each line can be either
- a string or a list of strings. List elements will be joined by single
- spaces before they are written to file.
- """
- with open(self._file, "r") as f:
- prior_data = f.read()
-
- with open(self._file, "w") as f:
- for l in lines:
- if isinstance(l, list):
- print(*l, file=f)
- else:
- print(l, file=f)
-
- f.write(prior_data)
-
-
-class Config(FileBackup):
- """
- Backs up a server's postgresql.conf and provides means for temporarily
- editing it. See also pg_server, which provides an instance of this class and
- context managers for enforcing the reload/restart order of operations.
- """
-
- def __init__(self, datadir: pathlib.Path):
- super().__init__(datadir / "postgresql.conf")
-
- def set(self, **gucs):
- """
- Temporarily appends GUC settings to the server's postgresql.conf.
- """
-
- with open(self._file, "a") as f:
- print(file=f)
-
- for n, v in gucs.items():
- v = str(v)
-
- # TODO: proper quoting
- v = v.replace("\\", "\\\\")
- v = v.replace("'", "\\'")
- v = "'{}'".format(v)
-
- print(n, "=", v, file=f)
-
-
-@pytest.fixture(scope="session")
-def pg_server_session(server_instance, connenv, datadir, winpassword):
- """
- Provides common routines for configuring and connecting to the
- server_instance. For example:
-
- users = pg_server_session.create_users("one", "two")
- dbs = pg_server_session.create_dbs("default")
-
- with pg_server_session.reloading() as s:
- s.hba.prepend(["local", dbs["default"], users["two"], "peer"])
-
- conn = connect_somehow(**pg_server_session.conninfo)
- ...
-
- Attributes of note are
- - .conninfo: provides TCP connection info for the server
-
- This fixture unwinds its configuration changes at the end of the pytest
- session. For more granular changes, pg_server_session.subcontext() splits
- off a "nested" context to allow smaller scopes.
- """
-
- class _Server(contextlib.ExitStack):
- conninfo = dict(
- hostaddr=server_instance[0],
- port=server_instance[1],
- )
-
- # for _backup_configuration()
- _Backup = namedtuple("Backup", "conf, hba")
-
- def subcontext(self):
- """
- Creates a new server stack instance that can be tied to a smaller
- scope than "session".
- """
- # So far, there doesn't seem to be a need to link the two objects,
- # since HBA/Config/FileBackup operate directly on the filesystem and
- # will appear to "nest" naturally.
- return self.__class__()
-
- def create_users(self, *userkeys: str) -> Dict[str, str]:
- """
- Creates new users which will be dropped at the end of the server
- context.
-
- For each provided key, a related user name will be selected and
- stored in a map. This map is returned to let calling code look up
- the selected usernames (instead of hardcoding them and potentially
- stomping on an existing installation).
- """
- usermap = {}
-
- for u in userkeys:
- # TODO: use a uniquifier to support installcheck
- name = u + "user"
- usermap[u] = name
-
- # TODO: proper escaping
- self.psql("-c", "CREATE USER " + name)
- self.callback(self.psql, "-c", "DROP USER " + name)
-
- return usermap
-
- def create_dbs(self, *dbkeys: str) -> Dict[str, str]:
- """
- Creates new databases which will be dropped at the end of the server
- context. See create_users() for the meaning of the keys and returned
- map.
- """
- dbmap = {}
-
- for d in dbkeys:
- # TODO: use a uniquifier to support installcheck
- name = d + "db"
- dbmap[d] = name
-
- # TODO: proper escaping
- self.psql("-c", "CREATE DATABASE " + name)
- self.callback(self.psql, "-c", "DROP DATABASE " + name)
-
- return dbmap
-
- @contextlib.contextmanager
- def reloading(self):
- """
- Provides a context manager for making configuration changes.
-
- If the context suite finishes successfully, the configuration will
- be reloaded via pg_ctl. On teardown, the configuration changes will
- be unwound, and the server will be signaled to reload again.
-
- The context target contains the following attributes which can be
- used to configure the server:
- - .conf: modifies postgresql.conf
- - .hba: modifies pg_hba.conf
-
- For example:
-
- with pg_server_session.reloading() as s:
- s.conf.set(log_connections="on")
- s.hba.prepend("local all all trust")
- """
- try:
- # Push a reload onto the stack before making any other
- # unwindable changes. That way the order of operations will be
- #
- # # test
- # - config change 1
- # - config change 2
- # - reload
- # # teardown
- # - undo config change 2
- # - undo config change 1
- # - reload
- #
- self.callback(self.pg_ctl, "reload")
- yield self._backup_configuration()
- except:
- # We only want to reload at the end of the suite if there were
- # no errors. During exceptions, the pushed callback handles
- # things instead, so there's nothing to do here.
- raise
- else:
- # Suite completed successfully.
- self.pg_ctl("reload")
-
- @contextlib.contextmanager
- def restarting(self):
- """Like .reloading(), but with a full server restart."""
- try:
- self.callback(self.pg_ctl, "restart")
- yield self._backup_configuration()
- except:
- raise
- else:
- self.pg_ctl("restart")
-
- def psql(self, *args):
- """
- Runs psql with the given arguments. Password prompts are always
- disabled. On Windows, the admin password will be included in the
- environment.
- """
- if platform.system() == "Windows":
- pw = dict(PGPASSWORD=winpassword)
- else:
- pw = None
-
- self._run("psql", "-w", *args, addenv=pw)
-
- def pg_ctl(self, *args):
- """
- Runs pg_ctl with the given arguments. Log output will be placed in
- postgresql.log in the server's data directory.
-
- TODO: put the log in TESTLOGDIR
- """
- self._run("pg_ctl", "-l", str(datadir / "postgresql.log"), *args)
-
- def _run(self, cmd, *args, addenv: dict = None):
- # Override the existing environment with the connenv values and
- # anything the caller wanted to add. (Python 3.9 gives us the
- # less-ugly `os.environ | connenv` merge operator.)
- subenv = dict(os.environ, **connenv)
- if addenv:
- subenv.update(addenv)
-
- subprocess.check_call([cmd, *args], env=subenv)
-
- def _backup_configuration(self):
- # Wrap the existing HBA and configuration with FileBackups.
- return self._Backup(
- hba=self.enter_context(HBA(datadir)),
- conf=self.enter_context(Config(datadir)),
- )
-
- with _Server() as s:
- yield s
-
-
-@pytest.fixture(scope="module", autouse=True)
-def ssl_setup(pg_server_session, certs, datadir):
- """
- Sets up required server settings for all tests in this module. The fixture
- variable is a tuple (users, dbs) containing the user and database names that
- have been chosen for the test session.
- """
- try:
- with pg_server_session.restarting() as s:
- s.conf.set(
- ssl="on",
- ssl_ca_file=certs.ca.certpath,
- ssl_cert_file=certs.server.certpath,
- ssl_key_file=certs.server.keypath,
- )
-
- # Reject by default.
- s.hba.prepend("hostssl all all all reject")
-
- except subprocess.CalledProcessError:
- # This is a decent place to skip if the server isn't set up for SSL.
- logpath = datadir / "postgresql.log"
- unsupported = re.compile("SSL is not supported")
-
- with open(logpath, "r") as log:
- for line in log:
- if unsupported.search(line):
- pytest.skip("the server does not support SSL")
-
- # Some other error happened.
- raise
-
- users = pg_server_session.create_users(
- "ssl",
- )
-
- dbs = pg_server_session.create_dbs(
- "ssl",
- )
-
- return (users, dbs)
-
-
-@pytest.fixture(scope="module")
-def client_cert(ssl_setup, certs):
- """
- Creates a Cert for the "ssl" user.
- """
- from cryptography import x509
- from cryptography.x509.oid import NameOID
-
- users, _ = ssl_setup
- user = users["ssl"]
-
- return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
-
-
-@pytest.fixture
-def pg_server(pg_server_session):
- """
- A per-test instance of pg_server_session. Use this fixture to make changes
- to the server which will be rolled back at the end of every test.
- """
- with pg_server_session.subcontext() as s:
- yield s
-
-
#
# Tests
#
@@ -394,8 +28,8 @@ CLIENT = "client"
SERVER = "server"
+# fmt: off
@pytest.mark.parametrize(
- # fmt: off
"auth_method, creds, expected_error",
[
# Trust allows anything.
@@ -416,10 +50,10 @@ SERVER = "server"
("cert", CLIENT, None),
("cert", SERVER, "authentication failed for user"),
],
- # fmt: on
)
+# fmt: on
def test_direct_ssl_certificate_authentication(
- pg_server,
+ pg,
ssl_setup,
certs,
client_cert,
@@ -440,7 +74,7 @@ def test_direct_ssl_certificate_authentication(
user = users["ssl"]
db = dbs["ssl"]
- with pg_server.reloading() as s:
+ with pg.reloading() as s:
s.hba.prepend(
["hostssl", db, user, "127.0.0.1/32", auth_method],
["hostssl", db, user, "::1/128", auth_method],
@@ -461,7 +95,7 @@ def test_direct_ssl_certificate_authentication(
# Make a direct SSL connection. There's no SSLRequest in the handshake; we
# simply wrap a TCP connection with OpenSSL.
- addr = (pg_server.conninfo["hostaddr"], pg_server.conninfo["port"])
+ addr = (pg.hostaddr, pg.port)
with socket.create_connection(addr) as s:
s.settimeout(remaining_timeout()) # XXX this resets every operation
--
2.51.1
On 2025-11-10 22:11:50 +0100, Jelte Fennema-Nio wrote:
On Wed Oct 22, 2025 at 2:44 PM CEST, Jelte Fennema-Nio wrote:
So here's your patchset with an additional commit on top that does a
bunch of refactoring/renaming and adding features.
Rebased to fix conflicts.
I assume this intentionally doesn't pass CI:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf%2F6045
From f6823405eb994d457f8123df0d417ca2340e4c71 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v3 01/10] meson: Include TAP tests in the configuration
summary

...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.

TODO: does Autoconf need something similar?
I agree with adding tap to the configuration summary, but I don't understand
the prove part; that just seems like a waste of vertical space.
From 5a27976496db53d8e9b88ab59e6c71f0f42dedcd Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v3 02/10] Add support for pytest test suites

Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.

I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.

test_something.py is intended to show a sample failure in the CI.
TODOs:
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
Yes, that needs to be baked into the image. Chocolatey is catastrophically
slow and unreliable. It's also just bad form to hit any service with such
repeated downloads.
This is true for *all* of the platforms.
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+  pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+  if pytest.found()
+    pytest_check = run_command(pytest,
+      '-c', 'pytest.ini',
+      '--confcutdir=config',
+      '--capture=no',
+      'config/check_pytest.py',
+      '--requirements', 'config/pytest-requirements.txt',
+      check: false)
+    if pytest_check.returncode() != 0
+      message(pytest_check.stderr())
+      if pytestopt.enabled()
+        error('Additional Python packages are required to run the pytest suites.')
+      else
+        warning('Additional Python packages are required to run the pytest suites.')
+      endif
+    else
+      pytest_enabled = true
+    endif
+  endif
+endif
Why do we need pytest the program at all? Running the tests one-by-one with
pytest as a runner doesn't seem to make a whole lot of sense to me.
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
 top_builddir = ../..
 include $(top_builddir)/src/Makefile.global
 
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+	authentication \
+	isolation \
+	modules \
+	perl \
+	postmaster \
+	pytest \
+	recovery \
+	regress \
+	subscription
I'm on board with that, but we should do it separately and probably check for
other cases where we should do it at the same time.
I think it'd be a seriously bad idea to start with no central infrastructure;
we'd be forced to duplicate that all over. Eventually we'll be forced to
introduce some central infrastructure, but we'll probably not go around and
carefully go through the existing tests for stuff that should now use the
common infrastructure.
Greetings,
Andres Freund
On Wed, 17 Dec 2025 at 17:10, Andres Freund <andres@anarazel.de> wrote:
I assume this intentionally doesn't pass CI:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf%2F6045
Yeah it was, but it turns out it was also actually broken because of that.
Attached is a new version that actually passes all tests. It also adds
logic to convert postgres errors into python exceptions. I also moved
the commits around a bit, so the SSL tests from Jacob are now built on
top of my improvements to the test infra.
Why do we need pytest the program at all? Running the tests one-by-one with
pytest as a runner doesn't seem to make a whole lot of sense to me.
Do you mean using "python -m pytest" instead of "pytest"?
Or do you mean running Python files manually somehow? Because that's not
possible. There are only functions defined in those test files; they're
not executable by themselves. pytest is still needed to find those in
each file, as well as the fixtures they require. And of course to make
assertion errors show up nicely.
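To make that concrete: a test module in this patchset is just plain functions
whose parameters name the fixtures they need, along the lines of this
hypothetical file:

    # pyt/test_example.py
    def test_simple_query(conn):
        assert conn.sql("SELECT 1") == 1

pytest collects test_simple_query, builds the conn fixture for it, and a
single test can then be rerun by itself with something like:

    pytest pyt/test_example.py::test_simple_query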
I think it'd be a seriously bad idea to start with no central infrastructure;
we'd be forced to duplicate that all over. Eventually we'll be forced to
introduce some central infrastructure, but we'll probably not go around and
carefully go through the existing tests for stuff that should now use the
common infrastructure.
The infra to do query execution on a single postgres server is there
(patch 0004). That one seemed the most important to me. I'm currently
still working on some infrastructure to be able to spawn multiple
postgres servers (I'm validating that by converting the libpq load
balance TAP tests that I wrote in the past). Is there other
infrastructure that you think is needed?
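(For what it's worth, the per-test pg fixture plus the reloading() helper
already covers temporary server reconfiguration; roughly as in
test_server.py, a test can do something like this hypothetical sketch, and
the HBA change and reload are unwound automatically at the end of the test:

    def test_with_open_hba(pg, conn):
        with pg.reloading() as s:
            s.hba.prepend("local all all trust")
        ...
)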
Attachments:
v4-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch
From 053998cfe8a414c96c806baa587a598085eaa793 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v4 1/7] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index d7c5193d4ce..551e27f5eb3 100644
--- a/meson.build
+++ b/meson.build
@@ -3981,6 +3981,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -4017,3 +4018,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: b47c50e5667b489bec3affb55ecdf4e9c306ca2d
--
2.52.0
v4-0002-Add-support-for-pytest-test-suites.patch
From 4e20afb9677fe443f274b7e91ee99bfab874003d Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v4 2/7] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
test_something.py is intended to show a sample failure in the CI.
TODOs:
- OpenBSD has an ANSI-related terminal bug, but I'm not sure if the bug
is in Cirrus, the image, pytest, Python, or readline. The TERM envvar
is unset to work around it. If this workaround is removed, a bad ANSI
escape is inserted into the pgtap output and mtest is unable to parse
it.
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
---
.cirrus.tasks.yml | 38 ++++--
.gitignore | 1 +
config/check_pytest.py | 150 ++++++++++++++++++++++++
config/conftest.py | 18 +++
config/pytest-requirements.txt | 21 ++++
configure | 108 ++++++++++++++++-
configure.ac | 25 +++-
meson.build | 92 +++++++++++++++
meson_options.txt | 8 +-
pytest.ini | 6 +
src/Makefile.global.in | 23 ++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 11 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/plugins/pgtap.py | 192 +++++++++++++++++++++++++++++++
18 files changed, 718 insertions(+), 15 deletions(-)
create mode 100644 config/check_pytest.py
create mode 100644 config/conftest.py
create mode 100644 config/pytest-requirements.txt
create mode 100644 pytest.ini
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/plugins/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 038d043d00e..ee2084bdfb6 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -225,7 +227,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -317,7 +321,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -329,6 +336,7 @@ task:
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
CORE_DUMP_EXECUTABLE_DIR: $CIRRUS_WORKING_DIR/build/tmp_install/usr/local/pgsql/bin
+ TERM: # TODO why does pytest print ANSI escapes on OpenBSD?
MESON_FEATURES: >-
-Dbsd_auth=enabled
@@ -337,7 +345,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -496,8 +506,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -521,14 +533,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -665,6 +678,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -714,6 +729,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -787,6 +803,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -795,8 +813,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -859,7 +879,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..268426003b1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
diff --git a/config/check_pytest.py b/config/check_pytest.py
new file mode 100644
index 00000000000..1562d16bcda
--- /dev/null
+++ b/config/check_pytest.py
@@ -0,0 +1,150 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Verify that pytest-requirements.txt is satisfied. This would probably be
+# easier with pip, but requiring pip on build machines is a non-starter for
+# many.
+#
+# This is coded as a pytest suite in order to check the Python distribution in
+# use by pytest, as opposed to the Python distribution being linked against
+# Postgres. In some setups they are separate.
+#
+# The design philosophy of this script is to bend over backwards to help people
+# figure out what is missing. The target audience for error output is the
+# buildfarm operator who just wants to get the tests running, not the test
+# developer who presumably already knows how to solve these problems.
+
+import importlib
+import sys
+from typing import List, Union # needed for earlier Python versions
+
+# importlib.metadata is part of the standard library from 3.8 onwards. Earlier
+# Python versions have an official backport called importlib_metadata, which can
+# generally be installed as a separate OS package (python3-importlib-metadata).
+# This complication can be removed once we stop supporting Python 3.7.
+try:
+ from importlib import metadata
+except ImportError:
+ try:
+ import importlib_metadata as metadata
+ except ImportError:
+ # package_version() will need to fall back. This is unlikely to happen
+ # in practice, because pytest 7.x depends on importlib_metadata itself.
+ metadata = None
+
+
+def report(*args):
+ """
+ Prints a configure-time message to the user. (The configure scripts will
+ display these messages and ignore the output from the pytest suite.) This
+ assumes --capture=no is in use, to avoid pytest's standard stream capture.
+ """
+ print(*args, file=sys.stderr)
+
+
+def package_version(pkg: str) -> Union[str, None]:
+ """
+ Returns the version of the named package, or None if the package is not
+ installed.
+
+ This function prefers to use the distribution package version, if we have
+ the necessary prerequisites. Otherwise it will fall back to the __version__
+ of the imported module, which aligns with pytest.importorskip().
+ """
+ if metadata is not None:
+ try:
+ return metadata.version(pkg)
+ except metadata.PackageNotFoundError:
+ return None
+
+ # This is an older Python and we don't have importlib_metadata. Fall back to
+ # __version__ instead.
+ try:
+ mod = importlib.import_module(pkg)
+ except ModuleNotFoundError:
+ return None
+
+ if hasattr(mod, "__version__"):
+ return mod.__version__
+
+ # We're out of options. If this turns out to cause problems in practice, we
+ # might need to require importlib_metadata on older buildfarm members. But
+ # since our top-level requirements list will be small, and this possibility
+ # will eventually age out with newer Pythons, don't spend more effort on
+ # this case for now.
+ report(f"Fix check_pytest.py! {pkg} has no __version__")
+ assert False, "internal error in package_version()"
+
+
+def packaging_check(requirements: List[str]) -> bool:
+ """
+ Reports the status of each required package to the configure program.
+ Returns True if all dependencies were found.
+ """
+ report() # an opening newline makes the configure output easier to read
+
+ try:
+ # packaging contains the PyPA definitions of requirement specifiers.
+ # This is contained in a separate OS package (for example,
+ # python3-packaging), but it's extremely likely that the user has it
+ # installed already, because modern versions of pytest depend on it too.
+ import packaging
+ from packaging.requirements import Requirement
+
+ except ImportError as err:
+ # We don't even have enough prerequisites to check our prerequisites.
+ # Print the import error as-is.
+ report(err)
+ return False
+
+ # Strip extraneous whitespace, whole-line comments, and empty lines from our
+ # specifier list.
+ requirements = [r.strip() for r in requirements]
+ requirements = [r for r in requirements if r and r[0] != "#"]
+
+ found = True
+ for spec in requirements:
+ req = Requirement(spec)
+
+ # Skip any packages marked as unneeded for this particular Python env.
+ if req.marker and not req.marker.evaluate():
+ continue
+
+ # Make sure the package is installed...
+ version = package_version(req.name)
+ if version is None:
+ report(f"package '{req.name}': not installed")
+ found = False
+ continue
+
+ # ...and that it has a compatible version.
+ if not req.specifier.contains(version):
+ report(
+ "package '{}': has version {}, but '{}' is required".format(
+ req.name, version, req.specifier
+ ),
+ )
+ found = False
+ continue
+
+ # Report installed packages too, to mirror check_modules.pl.
+ report(f"package '{req.name}': installed (version {version})")
+
+ return found
+
+
+def test_packages(requirements_file):
+ """
+ Entry point.
+ """
+ try:
+ with open(requirements_file, "r") as f:
+ requirements = f.readlines()
+
+ all_found = packaging_check(requirements)
+
+ except Exception as err:
+ # Surface any breakage to the configure script before failing the test.
+ report(err)
+ raise
+
+ assert all_found, "required packages are missing"
diff --git a/config/conftest.py b/config/conftest.py
new file mode 100644
index 00000000000..a9c2bc546e8
--- /dev/null
+++ b/config/conftest.py
@@ -0,0 +1,18 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+#
+# Support for check_pytest.py. The configure script provides the path to
+# pytest-requirements.txt via the --requirements option added here.
+
+import pytest
+
+
+def pytest_addoption(parser):
+ parser.addoption(
+ "--requirements",
+ help="path to pytest-requirements.txt",
+ )
+
+
+@pytest.fixture
+def requirements_file(request):
+ return request.config.getoption("--requirements")
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
new file mode 100644
index 00000000000..1c5e283d1e2
--- /dev/null
+++ b/config/pytest-requirements.txt
@@ -0,0 +1,21 @@
+#
+# This file contains the Python packages which are required in order for us to
+# enable pytest.
+#
+# The syntax is a *subset* of pip's requirements.txt syntax, so that both pip
+# and check_pytest.py can use it. Only whole-line comments and standard Python
+# dependency specifiers are allowed. pip-specific goodies like includes and
+# environment substitutions are not supported; keep it simple.
+#
+# Packages belong here if their absence should cause a configuration failure. If
+# you'd like to make a package optional, consider using pytest.importorskip()
+# instead.
+#
+
+# pytest 7.0 was the last version which supported Python 3.6, but the BSDs have
+# started putting 8.x into ports, so we support both. (pytest 8 can be used
+# throughout once we drop support for Python 3.7.)
+pytest >= 7.0, < 10
+
+# packaging is used by check_pytest.py at configure time.
+packaging
diff --git a/configure b/configure
index 14ad0a5006f..d6fe7d3d293 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,7 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +773,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +852,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1553,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3638,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3666,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19229,6 +19261,78 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for Python packages required for pytest" >&5
+$as_echo_n "checking for Python packages required for pytest... " >&6; }
+ modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`
+ if test $? -eq 0; then
+ echo "$modulestderr" >&5
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $modulestderr" >&5
+$as_echo "$modulestderr" >&6; }
+ as_fn_error $? "Additional Python packages are required to run the pytest suites" "$LINENO" 5
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 01b3bbc1be8..d513d374f3e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2412,6 +2417,22 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ AC_MSG_ERROR([pytest not found])
+ fi
+ AC_MSG_CHECKING(for Python packages required for pytest)
+ [modulestderr=`$PYTEST -c "$srcdir/pytest.ini" --confcutdir="$srcdir/config" --capture=no "$srcdir/config/check_pytest.py" --requirements "$srcdir/config/pytest-requirements.txt" 2>&1 >/dev/null`]
+ if test $? -eq 0; then
+ echo "$modulestderr" >&AS_MESSAGE_LOG_FD
+ AC_MSG_RESULT(yes)
+ else
+ AC_MSG_RESULT([$modulestderr])
+ AC_MSG_ERROR([Additional Python packages are required to run the pytest suites])
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 551e27f5eb3..e27c8ad4455 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,39 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: pytestopt)
+ if pytest.found()
+ pytest_check = run_command(pytest,
+ '-c', 'pytest.ini',
+ '--confcutdir=config',
+ '--capture=no',
+ 'config/check_pytest.py',
+ '--requirements', 'config/pytest-requirements.txt',
+ check: false)
+ if pytest_check.returncode() != 0
+ message(pytest_check.stderr())
+ if pytestopt.enabled()
+ error('Additional Python packages are required to run the pytest suites.')
+ else
+ warning('Additional Python packages are required to run the pytest suites.')
+ endif
+ else
+ pytest_enabled = true
+ endif
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3808,6 +3841,63 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = [
+ pytest.full_path(),
+ '-c', meson.project_source_root() / 'pytest.ini',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest' / 'plugins')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3982,6 +4072,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -4022,6 +4113,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 00000000000..8e8388f3afc
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1,6 @@
+[pytest]
+minversion = 7.0
+
+# Ignore ./config (which contains the configure-time check_pytest.py tests) by
+# default.
+addopts = --ignore ./config
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..39e67358289 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,27 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest/plugins:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pytest.ini' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index c6edf14ec44..5b9a804aa94 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,16 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/plugins/pgtap.py b/src/test/pytest/plugins/pgtap.py
new file mode 100644
index 00000000000..6a729d252e1
--- /dev/null
+++ b/src/test/pytest/plugins/pgtap.py
@@ -0,0 +1,192 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+        # Meson's test runner (mtest) has some odd behavior around TAP tests
+        # where it won't print diagnostics on failure if they're part of the
+        # stdout stream, so we just dump the details directly to stderr.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
--
2.52.0
Attachment: v4-0003-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch
From 6be433093dd5d7942bfb12ff8ff00a348ea50be8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v4 3/7] ci: Add MTEST_SUITES for optional test tailoring
This should make it easier to control the test cycle time for Cirrus.
Add the desired suites (remembering `--suite setup`!) to the top-level
envvar.
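
For example, expanding the placeholder comment added to the YAML below,
a branch focused on the pytest work could trim the test run down to
something like this (suite names are illustrative):

    MTEST_SUITES: --suite setup --suite pytest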
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index ee2084bdfb6..3b0bb202276 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -251,7 +252,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -397,7 +398,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -615,7 +616,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -628,7 +629,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -752,7 +753,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -835,7 +836,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -896,7 +897,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.52.0
Attachment: v4-0004-Add-pytest-infrastructure-to-interact-with-Postgr.patch
From e951a06800fae075d0f180ed0b409b2ed281e40e Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v4 4/7] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- starting
- stopping
- connecting
- running queries
The query-running support in particular is designed for convenience:
types get converted to their Python counterparts automatically, and
there's logic to unpack single fields or single rows so you don't have
to write rows[0][0] when a query only returns a single cell. In a
similar vein, Postgres errors are automatically converted to dedicated
Python exception types.
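
For illustration, a test using these helpers might look like the sketch
below. (The `conn` fixture name and the re-export of the generated
error classes through libpq/errors.py are assumptions based on the file
list; see pypg/fixtures.py and libpq/errors.py for the actual names.)

    import pytest
    from libpq import errors

    def test_sql_helper(conn):
        assert conn.sql("SELECT 1 + 1") == 2          # single cell -> value
        assert conn.sql("SELECT 1, 'a'") == (1, "a")  # single row -> tuple
        assert conn.sql("SELECT * FROM generate_series(1, 3)") == [1, 2, 3]
        assert conn.sql("CREATE TABLE t (i int)") is None  # command -> None

        # Postgres errors map to generated exception classes by SQLSTATE.
        with pytest.raises(errors.DivisionByZero):
            conn.sql("SELECT 1/0")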
---
pytest.ini | 3 +
src/backend/utils/errcodes.txt | 5 +
src/test/pytest/libpq/__init__.py | 36 +
src/test/pytest/libpq/_core.py | 467 +++++
src/test/pytest/libpq/_error_base.py | 74 +
src/test/pytest/libpq/_generated_errors.py | 2116 ++++++++++++++++++++
src/test/pytest/libpq/errors.py | 39 +
src/test/pytest/meson.build | 4 +-
src/test/pytest/pypg/__init__.py | 4 +
src/test/pytest/pypg/_env.py | 54 +
src/test/pytest/pypg/_win32.py | 145 ++
src/test/pytest/pypg/fixtures.py | 191 ++
src/test/pytest/pypg/server.py | 391 ++++
src/test/pytest/pypg/util.py | 42 +
src/test/pytest/pyt/conftest.py | 4 +
src/test/pytest/pyt/test_errors.py | 34 +
src/test/pytest/pyt/test_libpq.py | 172 ++
src/test/pytest/pyt/test_query_helpers.py | 286 +++
src/tools/generate_pytest_libpq_errors.py | 147 ++
19 files changed, 4213 insertions(+), 1 deletion(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/_error_base.py
create mode 100644 src/test/pytest/libpq/_generated_errors.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/_win32.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
create mode 100755 src/tools/generate_pytest_libpq_errors.py
diff --git a/pytest.ini b/pytest.ini
index 8e8388f3afc..e7aa84f3a84 100644
--- a/pytest.ini
+++ b/pytest.ini
@@ -4,3 +4,6 @@ minversion = 7.0
# Ignore ./config (which contains the configure-time check_pytest.py tests) by
# default.
addopts = --ignore ./config
+
+# Common test code can be found here.
+pythonpath = src/test/pytest
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index c96aa7c49ef..40c7555047e 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -21,6 +21,11 @@
# doc/src/sgml/errcodes-table.sgml
# a SGML table of error codes for inclusion in the documentation
#
+# src/test/pytest/libpq/_generated_errors.py
+# Python exception classes for the pytest libpq wrapper
+# Note: This needs to be manually regenerated by running
+# src/tools/generate_pytest_libpq_errors.py
+#
# The format of this file is one error code per line, with the following
# whitespace-separated fields:
#
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..cb4d18b6206
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError, LibpqWarning
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "LibpqWarning",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..4776f0ff47e
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,467 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError, make_error
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ if system == "Windows":
+        # On Windows, libpq.dll is confusingly in bindir, not libdir, and we
+        # need to add this directory to the search path.
+ libpq_path = os.path.join(bindir, name)
+ lib = ctypes.CDLL(libpq_path)
+ else:
+ libpq_path = os.path.join(libdir, name)
+ lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+# Helper converters
+def _parse_array(value: str, elem_oid: int) -> list:
+ """Parse PostgreSQL array syntax: {elem1,elem2,elem3}"""
+ if not (value.startswith("{") and value.endswith("}")):
+ return value
+
+ inner = value[1:-1]
+ if not inner:
+ return []
+
+ elements = inner.split(",")
+ result = []
+ for elem in elements:
+ elem = elem.strip()
+ if elem == "NULL":
+ result.append(None)
+ else:
+ # Remove quotes if present
+ if elem.startswith('"') and elem.endswith('"'):
+ elem = elem[1:-1]
+ result.append(_convert_pg_value(elem, elem_oid))
+
+ return result
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+    # If there are multiple rows and columns, return the results as-is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self, query: Optional[str] = None) -> None:
+ """
+ Raises an appropriate LibpqError subclass based on the error fields.
+ Extracts SQLSTATE and other diagnostic information from the result.
+ """
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ # Build the error message
+ message = primary or self.error_message()
+ if query:
+ message = f"{message}\nQuery: {query}"
+
+ raise make_error(
+ message,
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error(query)
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error(query)
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/_error_base.py b/src/test/pytest/libpq/_error_base.py
new file mode 100644
index 00000000000..5c70c077193
--- /dev/null
+++ b/src/test/pytest/libpq/_error_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Base exception classes for libpq errors and warnings.
+"""
+
+from typing import Optional
+
+
+class LibpqExceptionMixin:
+ """Mixin providing PostgreSQL error field attributes."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
+
+
+class LibpqError(LibpqExceptionMixin, RuntimeError):
+ """Base exception for libpq errors."""
+
+ pass
+
+
+class LibpqWarning(LibpqExceptionMixin, UserWarning):
+ """Base exception for libpq warnings."""
+
+ pass
diff --git a/src/test/pytest/libpq/_generated_errors.py b/src/test/pytest/libpq/_generated_errors.py
new file mode 100644
index 00000000000..f50f3143580
--- /dev/null
+++ b/src/test/pytest/libpq/_generated_errors.py
@@ -0,0 +1,2116 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.
+
+"""
+Generated PostgreSQL error classes mapped from SQLSTATE codes.
+"""
+
+from typing import Dict
+
+from ._error_base import LibpqError, LibpqWarning
+
+
+class SuccessfulCompletion(LibpqError):
+ """SQLSTATE 00000 - successful completion."""
+
+ pass
+
+
+class Warning(LibpqWarning):
+ """SQLSTATE 01000 - warning."""
+
+ pass
+
+
+class DynamicResultSetsReturnedWarning(Warning):
+ """SQLSTATE 0100C - dynamic result sets returned."""
+
+ pass
+
+
+class ImplicitZeroBitPaddingWarning(Warning):
+ """SQLSTATE 01008 - implicit zero bit padding."""
+
+ pass
+
+
+class NullValueEliminatedInSetFunctionWarning(Warning):
+ """SQLSTATE 01003 - null value eliminated in set function."""
+
+ pass
+
+
+class PrivilegeNotGrantedWarning(Warning):
+ """SQLSTATE 01007 - privilege not granted."""
+
+ pass
+
+
+class PrivilegeNotRevokedWarning(Warning):
+ """SQLSTATE 01006 - privilege not revoked."""
+
+ pass
+
+
+class StringDataRightTruncationWarning(Warning):
+ """SQLSTATE 01004 - string data right truncation."""
+
+ pass
+
+
+class DeprecatedFeatureWarning(Warning):
+ """SQLSTATE 01P01 - deprecated feature."""
+
+ pass
+
+
+class NoData(LibpqError):
+ """SQLSTATE 02000 - no data."""
+
+ pass
+
+
+class NoAdditionalDynamicResultSetsReturned(NoData):
+ """SQLSTATE 02001 - no additional dynamic result sets returned."""
+
+ pass
+
+
+class SQLStatementNotYetComplete(LibpqError):
+ """SQLSTATE 03000 - sql statement not yet complete."""
+
+ pass
+
+
+class ConnectionException(LibpqError):
+ """SQLSTATE 08000 - connection exception."""
+
+ pass
+
+
+class ConnectionDoesNotExist(ConnectionException):
+ """SQLSTATE 08003 - connection does not exist."""
+
+ pass
+
+
+class ConnectionFailure(ConnectionException):
+ """SQLSTATE 08006 - connection failure."""
+
+ pass
+
+
+class SQLClientUnableToEstablishSQLConnection(ConnectionException):
+ """SQLSTATE 08001 - sqlclient unable to establish sqlconnection."""
+
+ pass
+
+
+class SQLServerRejectedEstablishmentOfSQLConnection(ConnectionException):
+ """SQLSTATE 08004 - sqlserver rejected establishment of sqlconnection."""
+
+ pass
+
+
+class TransactionResolutionUnknown(ConnectionException):
+ """SQLSTATE 08007 - transaction resolution unknown."""
+
+ pass
+
+
+class ProtocolViolation(ConnectionException):
+ """SQLSTATE 08P01 - protocol violation."""
+
+ pass
+
+
+class TriggeredActionException(LibpqError):
+ """SQLSTATE 09000 - triggered action exception."""
+
+ pass
+
+
+class FeatureNotSupported(LibpqError):
+ """SQLSTATE 0A000 - feature not supported."""
+
+ pass
+
+
+class InvalidTransactionInitiation(LibpqError):
+ """SQLSTATE 0B000 - invalid transaction initiation."""
+
+ pass
+
+
+class LocatorException(LibpqError):
+ """SQLSTATE 0F000 - locator exception."""
+
+ pass
+
+
+class InvalidLocatorSpecification(LocatorException):
+ """SQLSTATE 0F001 - invalid locator specification."""
+
+ pass
+
+
+class InvalidGrantor(LibpqError):
+ """SQLSTATE 0L000 - invalid grantor."""
+
+ pass
+
+
+class InvalidGrantOperation(InvalidGrantor):
+ """SQLSTATE 0LP01 - invalid grant operation."""
+
+ pass
+
+
+class InvalidRoleSpecification(LibpqError):
+ """SQLSTATE 0P000 - invalid role specification."""
+
+ pass
+
+
+class DiagnosticsException(LibpqError):
+ """SQLSTATE 0Z000 - diagnostics exception."""
+
+ pass
+
+
+class StackedDiagnosticsAccessedWithoutActiveHandler(DiagnosticsException):
+ """SQLSTATE 0Z002 - stacked diagnostics accessed without active handler."""
+
+ pass
+
+
+class InvalidArgumentForXquery(LibpqError):
+ """SQLSTATE 10608 - invalid argument for xquery."""
+
+ pass
+
+
+class CaseNotFound(LibpqError):
+ """SQLSTATE 20000 - case not found."""
+
+ pass
+
+
+class CardinalityViolation(LibpqError):
+ """SQLSTATE 21000 - cardinality violation."""
+
+ pass
+
+
+class DataException(LibpqError):
+ """SQLSTATE 22000 - data exception."""
+
+ pass
+
+
+class ArraySubscriptError(DataException):
+ """SQLSTATE 2202E - array subscript error."""
+
+ pass
+
+
+class CharacterNotInRepertoire(DataException):
+ """SQLSTATE 22021 - character not in repertoire."""
+
+ pass
+
+
+class DatetimeFieldOverflow(DataException):
+ """SQLSTATE 22008 - datetime field overflow."""
+
+ pass
+
+
+class DivisionByZero(DataException):
+ """SQLSTATE 22012 - division by zero."""
+
+ pass
+
+
+class ErrorInAssignment(DataException):
+ """SQLSTATE 22005 - error in assignment."""
+
+ pass
+
+
+class EscapeCharacterConflict(DataException):
+ """SQLSTATE 2200B - escape character conflict."""
+
+ pass
+
+
+class IndicatorOverflow(DataException):
+ """SQLSTATE 22022 - indicator overflow."""
+
+ pass
+
+
+class IntervalFieldOverflow(DataException):
+ """SQLSTATE 22015 - interval field overflow."""
+
+ pass
+
+
+class InvalidArgumentForLogarithm(DataException):
+ """SQLSTATE 2201E - invalid argument for logarithm."""
+
+ pass
+
+
+class InvalidArgumentForNtileFunction(DataException):
+ """SQLSTATE 22014 - invalid argument for ntile function."""
+
+ pass
+
+
+class InvalidArgumentForNthValueFunction(DataException):
+ """SQLSTATE 22016 - invalid argument for nth value function."""
+
+ pass
+
+
+class InvalidArgumentForPowerFunction(DataException):
+ """SQLSTATE 2201F - invalid argument for power function."""
+
+ pass
+
+
+class InvalidArgumentForWidthBucketFunction(DataException):
+ """SQLSTATE 2201G - invalid argument for width bucket function."""
+
+ pass
+
+
+class InvalidCharacterValueForCast(DataException):
+ """SQLSTATE 22018 - invalid character value for cast."""
+
+ pass
+
+
+class InvalidDatetimeFormat(DataException):
+ """SQLSTATE 22007 - invalid datetime format."""
+
+ pass
+
+
+class InvalidEscapeCharacter(DataException):
+ """SQLSTATE 22019 - invalid escape character."""
+
+ pass
+
+
+class InvalidEscapeOctet(DataException):
+ """SQLSTATE 2200D - invalid escape octet."""
+
+ pass
+
+
+class InvalidEscapeSequence(DataException):
+ """SQLSTATE 22025 - invalid escape sequence."""
+
+ pass
+
+
+class NonstandardUseOfEscapeCharacter(DataException):
+ """SQLSTATE 22P06 - nonstandard use of escape character."""
+
+ pass
+
+
+class InvalidIndicatorParameterValue(DataException):
+ """SQLSTATE 22010 - invalid indicator parameter value."""
+
+ pass
+
+
+class InvalidParameterValue(DataException):
+ """SQLSTATE 22023 - invalid parameter value."""
+
+ pass
+
+
+class InvalidPrecedingOrFollowingSize(DataException):
+ """SQLSTATE 22013 - invalid preceding or following size."""
+
+ pass
+
+
+class InvalidRegularExpression(DataException):
+ """SQLSTATE 2201B - invalid regular expression."""
+
+ pass
+
+
+class InvalidRowCountInLimitClause(DataException):
+ """SQLSTATE 2201W - invalid row count in limit clause."""
+
+ pass
+
+
+class InvalidRowCountInResultOffsetClause(DataException):
+ """SQLSTATE 2201X - invalid row count in result offset clause."""
+
+ pass
+
+
+class InvalidTablesampleArgument(DataException):
+ """SQLSTATE 2202H - invalid tablesample argument."""
+
+ pass
+
+
+class InvalidTablesampleRepeat(DataException):
+ """SQLSTATE 2202G - invalid tablesample repeat."""
+
+ pass
+
+
+class InvalidTimeZoneDisplacementValue(DataException):
+ """SQLSTATE 22009 - invalid time zone displacement value."""
+
+ pass
+
+
+class InvalidUseOfEscapeCharacter(DataException):
+ """SQLSTATE 2200C - invalid use of escape character."""
+
+ pass
+
+
+class MostSpecificTypeMismatch(DataException):
+ """SQLSTATE 2200G - most specific type mismatch."""
+
+ pass
+
+
+class NullValueNotAllowed(DataException):
+ """SQLSTATE 22004 - null value not allowed."""
+
+ pass
+
+
+class NullValueNoIndicatorParameter(DataException):
+ """SQLSTATE 22002 - null value no indicator parameter."""
+
+ pass
+
+
+class NumericValueOutOfRange(DataException):
+ """SQLSTATE 22003 - numeric value out of range."""
+
+ pass
+
+
+class SequenceGeneratorLimitExceeded(DataException):
+ """SQLSTATE 2200H - sequence generator limit exceeded."""
+
+ pass
+
+
+class StringDataLengthMismatch(DataException):
+ """SQLSTATE 22026 - string data length mismatch."""
+
+ pass
+
+
+class StringDataRightTruncation(DataException):
+ """SQLSTATE 22001 - string data right truncation."""
+
+ pass
+
+
+class SubstringError(DataException):
+ """SQLSTATE 22011 - substring error."""
+
+ pass
+
+
+class TrimError(DataException):
+ """SQLSTATE 22027 - trim error."""
+
+ pass
+
+
+class UnterminatedCString(DataException):
+ """SQLSTATE 22024 - unterminated c string."""
+
+ pass
+
+
+class ZeroLengthCharacterString(DataException):
+ """SQLSTATE 2200F - zero length character string."""
+
+ pass
+
+
+class FloatingPointException(DataException):
+ """SQLSTATE 22P01 - floating point exception."""
+
+ pass
+
+
+class InvalidTextRepresentation(DataException):
+ """SQLSTATE 22P02 - invalid text representation."""
+
+ pass
+
+
+class InvalidBinaryRepresentation(DataException):
+ """SQLSTATE 22P03 - invalid binary representation."""
+
+ pass
+
+
+class BadCopyFileFormat(DataException):
+ """SQLSTATE 22P04 - bad copy file format."""
+
+ pass
+
+
+class UntranslatableCharacter(DataException):
+ """SQLSTATE 22P05 - untranslatable character."""
+
+ pass
+
+
+class NotAnXmlDocument(DataException):
+ """SQLSTATE 2200L - not an xml document."""
+
+ pass
+
+
+class InvalidXmlDocument(DataException):
+ """SQLSTATE 2200M - invalid xml document."""
+
+ pass
+
+
+class InvalidXmlContent(DataException):
+ """SQLSTATE 2200N - invalid xml content."""
+
+ pass
+
+
+class InvalidXmlComment(DataException):
+ """SQLSTATE 2200S - invalid xml comment."""
+
+ pass
+
+
+class InvalidXmlProcessingInstruction(DataException):
+ """SQLSTATE 2200T - invalid xml processing instruction."""
+
+ pass
+
+
+class DuplicateJsonObjectKeyValue(DataException):
+ """SQLSTATE 22030 - duplicate json object key value."""
+
+ pass
+
+
+class InvalidArgumentForSQLJsonDatetimeFunction(DataException):
+ """SQLSTATE 22031 - invalid argument for sql json datetime function."""
+
+ pass
+
+
+class InvalidJsonText(DataException):
+ """SQLSTATE 22032 - invalid json text."""
+
+ pass
+
+
+class InvalidSQLJsonSubscript(DataException):
+ """SQLSTATE 22033 - invalid sql json subscript."""
+
+ pass
+
+
+class MoreThanOneSQLJsonItem(DataException):
+ """SQLSTATE 22034 - more than one sql json item."""
+
+ pass
+
+
+class NoSQLJsonItem(DataException):
+ """SQLSTATE 22035 - no sql json item."""
+
+ pass
+
+
+class NonNumericSQLJsonItem(DataException):
+ """SQLSTATE 22036 - non numeric sql json item."""
+
+ pass
+
+
+class NonUniqueKeysInAJsonObject(DataException):
+ """SQLSTATE 22037 - non unique keys in a json object."""
+
+ pass
+
+
+class SingletonSQLJsonItemRequired(DataException):
+ """SQLSTATE 22038 - singleton sql json item required."""
+
+ pass
+
+
+class SQLJsonArrayNotFound(DataException):
+ """SQLSTATE 22039 - sql json array not found."""
+
+ pass
+
+
+class SQLJsonMemberNotFound(DataException):
+ """SQLSTATE 2203A - sql json member not found."""
+
+ pass
+
+
+class SQLJsonNumberNotFound(DataException):
+ """SQLSTATE 2203B - sql json number not found."""
+
+ pass
+
+
+class SQLJsonObjectNotFound(DataException):
+ """SQLSTATE 2203C - sql json object not found."""
+
+ pass
+
+
+class TooManyJsonArrayElements(DataException):
+ """SQLSTATE 2203D - too many json array elements."""
+
+ pass
+
+
+class TooManyJsonObjectMembers(DataException):
+ """SQLSTATE 2203E - too many json object members."""
+
+ pass
+
+
+class SQLJsonScalarRequired(DataException):
+ """SQLSTATE 2203F - sql json scalar required."""
+
+ pass
+
+
+class SQLJsonItemCannotBeCastToTargetType(DataException):
+ """SQLSTATE 2203G - sql json item cannot be cast to target type."""
+
+ pass
+
+
+class IntegrityConstraintViolation(LibpqError):
+ """SQLSTATE 23000 - integrity constraint violation."""
+
+ pass
+
+
+class RestrictViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23001 - restrict violation."""
+
+ pass
+
+
+class NotNullViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23502 - not null violation."""
+
+ pass
+
+
+class ForeignKeyViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23503 - foreign key violation."""
+
+ pass
+
+
+class UniqueViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23505 - unique violation."""
+
+ pass
+
+
+class CheckViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23514 - check violation."""
+
+ pass
+
+
+class ExclusionViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23P01 - exclusion violation."""
+
+ pass
+
+
+class InvalidCursorState(LibpqError):
+ """SQLSTATE 24000 - invalid cursor state."""
+
+ pass
+
+
+class InvalidTransactionState(LibpqError):
+ """SQLSTATE 25000 - invalid transaction state."""
+
+ pass
+
+
+class ActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25001 - active sql transaction."""
+
+ pass
+
+
+class BranchTransactionAlreadyActive(InvalidTransactionState):
+ """SQLSTATE 25002 - branch transaction already active."""
+
+ pass
+
+
+class HeldCursorRequiresSameIsolationLevel(InvalidTransactionState):
+ """SQLSTATE 25008 - held cursor requires same isolation level."""
+
+ pass
+
+
+class InappropriateAccessModeForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25003 - inappropriate access mode for branch transaction."""
+
+ pass
+
+
+class InappropriateIsolationLevelForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25004 - inappropriate isolation level for branch transaction."""
+
+ pass
+
+
+class NoActiveSQLTransactionForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25005 - no active sql transaction for branch transaction."""
+
+ pass
+
+
+class ReadOnlySQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25006 - read only sql transaction."""
+
+ pass
+
+
+class SchemaAndDataStatementMixingNotSupported(InvalidTransactionState):
+ """SQLSTATE 25007 - schema and data statement mixing not supported."""
+
+ pass
+
+
+class NoActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P01 - no active sql transaction."""
+
+ pass
+
+
+class InFailedSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P02 - in failed sql transaction."""
+
+ pass
+
+
+class IdleInTransactionSessionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P03 - idle in transaction session timeout."""
+
+ pass
+
+
+class TransactionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P04 - transaction timeout."""
+
+ pass
+
+
+class InvalidSQLStatementName(LibpqError):
+ """SQLSTATE 26000 - invalid sql statement name."""
+
+ pass
+
+
+class TriggeredDataChangeViolation(LibpqError):
+ """SQLSTATE 27000 - triggered data change violation."""
+
+ pass
+
+
+class InvalidAuthorizationSpecification(LibpqError):
+ """SQLSTATE 28000 - invalid authorization specification."""
+
+ pass
+
+
+class InvalidPassword(InvalidAuthorizationSpecification):
+ """SQLSTATE 28P01 - invalid password."""
+
+ pass
+
+
+class DependentPrivilegeDescriptorsStillExist(LibpqError):
+ """SQLSTATE 2B000 - dependent privilege descriptors still exist."""
+
+ pass
+
+
+class DependentObjectsStillExist(DependentPrivilegeDescriptorsStillExist):
+ """SQLSTATE 2BP01 - dependent objects still exist."""
+
+ pass
+
+
+class InvalidTransactionTermination(LibpqError):
+ """SQLSTATE 2D000 - invalid transaction termination."""
+
+ pass
+
+
+class SQLRoutineException(LibpqError):
+ """SQLSTATE 2F000 - sql routine exception."""
+
+ pass
+
+
+class FunctionExecutedNoReturnStatement(SQLRoutineException):
+ """SQLSTATE 2F005 - function executed no return statement."""
+
+ pass
+
+
+class SREModifyingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F002 - modifying sql data not permitted."""
+
+ pass
+
+
+class SREProhibitedSQLStatementAttempted(SQLRoutineException):
+ """SQLSTATE 2F003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class SREReadingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F004 - reading sql data not permitted."""
+
+ pass
+
+
+class InvalidCursorName(LibpqError):
+ """SQLSTATE 34000 - invalid cursor name."""
+
+ pass
+
+
+class ExternalRoutineException(LibpqError):
+ """SQLSTATE 38000 - external routine exception."""
+
+ pass
+
+
+class ContainingSQLNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38001 - containing sql not permitted."""
+
+ pass
+
+
+class EREModifyingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38002 - modifying sql data not permitted."""
+
+ pass
+
+
+class EREProhibitedSQLStatementAttempted(ExternalRoutineException):
+ """SQLSTATE 38003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class EREReadingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38004 - reading sql data not permitted."""
+
+ pass
+
+
+class ExternalRoutineInvocationException(LibpqError):
+ """SQLSTATE 39000 - external routine invocation exception."""
+
+ pass
+
+
+class InvalidSqlstateReturned(ExternalRoutineInvocationException):
+ """SQLSTATE 39001 - invalid sqlstate returned."""
+
+ pass
+
+
+class ERIENullValueNotAllowed(ExternalRoutineInvocationException):
+ """SQLSTATE 39004 - null value not allowed."""
+
+ pass
+
+
+class TriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P01 - trigger protocol violated."""
+
+ pass
+
+
+class SrfProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P02 - srf protocol violated."""
+
+ pass
+
+
+class EventTriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P03 - event trigger protocol violated."""
+
+ pass
+
+
+class SavepointException(LibpqError):
+ """SQLSTATE 3B000 - savepoint exception."""
+
+ pass
+
+
+class InvalidSavepointSpecification(SavepointException):
+ """SQLSTATE 3B001 - invalid savepoint specification."""
+
+ pass
+
+
+class InvalidCatalogName(LibpqError):
+ """SQLSTATE 3D000 - invalid catalog name."""
+
+ pass
+
+
+class InvalidSchemaName(LibpqError):
+ """SQLSTATE 3F000 - invalid schema name."""
+
+ pass
+
+
+class TransactionRollback(LibpqError):
+ """SQLSTATE 40000 - transaction rollback."""
+
+ pass
+
+
+class TransactionIntegrityConstraintViolation(TransactionRollback):
+ """SQLSTATE 40002 - transaction integrity constraint violation."""
+
+ pass
+
+
+class SerializationFailure(TransactionRollback):
+ """SQLSTATE 40001 - serialization failure."""
+
+ pass
+
+
+class StatementCompletionUnknown(TransactionRollback):
+ """SQLSTATE 40003 - statement completion unknown."""
+
+ pass
+
+
+class DeadlockDetected(TransactionRollback):
+ """SQLSTATE 40P01 - deadlock detected."""
+
+ pass
+
+
+class SyntaxErrorOrAccessRuleViolation(LibpqError):
+ """SQLSTATE 42000 - syntax error or access rule violation."""
+
+ pass
+
+
+class SyntaxError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42601 - syntax error."""
+
+ pass
+
+
+class InsufficientPrivilege(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42501 - insufficient privilege."""
+
+ pass
+
+
+class CannotCoerce(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42846 - cannot coerce."""
+
+ pass
+
+
+class GroupingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42803 - grouping error."""
+
+ pass
+
+
+class WindowingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P20 - windowing error."""
+
+ pass
+
+
+class InvalidRecursion(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P19 - invalid recursion."""
+
+ pass
+
+
+class InvalidForeignKey(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42830 - invalid foreign key."""
+
+ pass
+
+
+class InvalidName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42602 - invalid name."""
+
+ pass
+
+
+class NameTooLong(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42622 - name too long."""
+
+ pass
+
+
+class ReservedName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42939 - reserved name."""
+
+ pass
+
+
+class DatatypeMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42804 - datatype mismatch."""
+
+ pass
+
+
+class IndeterminateDatatype(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P18 - indeterminate datatype."""
+
+ pass
+
+
+class CollationMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P21 - collation mismatch."""
+
+ pass
+
+
+class IndeterminateCollation(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P22 - indeterminate collation."""
+
+ pass
+
+
+class WrongObjectType(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42809 - wrong object type."""
+
+ pass
+
+
+class GeneratedAlways(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 428C9 - generated always."""
+
+ pass
+
+
+class UndefinedColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42703 - undefined column."""
+
+ pass
+
+
+class UndefinedFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42883 - undefined function."""
+
+ pass
+
+
+class UndefinedTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P01 - undefined table."""
+
+ pass
+
+
+class UndefinedParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P02 - undefined parameter."""
+
+ pass
+
+
+class UndefinedObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42704 - undefined object."""
+
+ pass
+
+
+class DuplicateColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42701 - duplicate column."""
+
+ pass
+
+
+class DuplicateCursor(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P03 - duplicate cursor."""
+
+ pass
+
+
+class DuplicateDatabase(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P04 - duplicate database."""
+
+ pass
+
+
+class DuplicateFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42723 - duplicate function."""
+
+ pass
+
+
+class DuplicatePreparedStatement(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P05 - duplicate prepared statement."""
+
+ pass
+
+
+class DuplicateSchema(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P06 - duplicate schema."""
+
+ pass
+
+
+class DuplicateTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P07 - duplicate table."""
+
+ pass
+
+
+class DuplicateAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42712 - duplicate alias."""
+
+ pass
+
+
+class DuplicateObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42710 - duplicate object."""
+
+ pass
+
+
+class AmbiguousColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42702 - ambiguous column."""
+
+ pass
+
+
+class AmbiguousFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42725 - ambiguous function."""
+
+ pass
+
+
+class AmbiguousParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P08 - ambiguous parameter."""
+
+ pass
+
+
+class AmbiguousAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P09 - ambiguous alias."""
+
+ pass
+
+
+class InvalidColumnReference(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P10 - invalid column reference."""
+
+ pass
+
+
+class InvalidColumnDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42611 - invalid column definition."""
+
+ pass
+
+
+class InvalidCursorDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P11 - invalid cursor definition."""
+
+ pass
+
+
+class InvalidDatabaseDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P12 - invalid database definition."""
+
+ pass
+
+
+class InvalidFunctionDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P13 - invalid function definition."""
+
+ pass
+
+
+class InvalidPreparedStatementDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P14 - invalid prepared statement definition."""
+
+ pass
+
+
+class InvalidSchemaDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P15 - invalid schema definition."""
+
+ pass
+
+
+class InvalidTableDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P16 - invalid table definition."""
+
+ pass
+
+
+class InvalidObjectDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P17 - invalid object definition."""
+
+ pass
+
+
+class WithCheckOptionViolation(LibpqError):
+ """SQLSTATE 44000 - with check option violation."""
+
+ pass
+
+
+class InsufficientResources(LibpqError):
+ """SQLSTATE 53000 - insufficient resources."""
+
+ pass
+
+
+class DiskFull(InsufficientResources):
+ """SQLSTATE 53100 - disk full."""
+
+ pass
+
+
+class OutOfMemory(InsufficientResources):
+ """SQLSTATE 53200 - out of memory."""
+
+ pass
+
+
+class TooManyConnections(InsufficientResources):
+ """SQLSTATE 53300 - too many connections."""
+
+ pass
+
+
+class ConfigurationLimitExceeded(InsufficientResources):
+ """SQLSTATE 53400 - configuration limit exceeded."""
+
+ pass
+
+
+class ProgramLimitExceeded(LibpqError):
+ """SQLSTATE 54000 - program limit exceeded."""
+
+ pass
+
+
+class StatementTooComplex(ProgramLimitExceeded):
+ """SQLSTATE 54001 - statement too complex."""
+
+ pass
+
+
+class TooManyColumns(ProgramLimitExceeded):
+ """SQLSTATE 54011 - too many columns."""
+
+ pass
+
+
+class TooManyArguments(ProgramLimitExceeded):
+ """SQLSTATE 54023 - too many arguments."""
+
+ pass
+
+
+class ObjectNotInPrerequisiteState(LibpqError):
+ """SQLSTATE 55000 - object not in prerequisite state."""
+
+ pass
+
+
+class ObjectInUse(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55006 - object in use."""
+
+ pass
+
+
+class CantChangeRuntimeParam(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P02 - cant change runtime param."""
+
+ pass
+
+
+class LockNotAvailable(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P03 - lock not available."""
+
+ pass
+
+
+class UnsafeNewEnumValueUsage(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P04 - unsafe new enum value usage."""
+
+ pass
+
+
+class OperatorIntervention(LibpqError):
+ """SQLSTATE 57000 - operator intervention."""
+
+ pass
+
+
+class QueryCanceled(OperatorIntervention):
+ """SQLSTATE 57014 - query canceled."""
+
+ pass
+
+
+class AdminShutdown(OperatorIntervention):
+ """SQLSTATE 57P01 - admin shutdown."""
+
+ pass
+
+
+class CrashShutdown(OperatorIntervention):
+ """SQLSTATE 57P02 - crash shutdown."""
+
+ pass
+
+
+class CannotConnectNow(OperatorIntervention):
+ """SQLSTATE 57P03 - cannot connect now."""
+
+ pass
+
+
+class DatabaseDropped(OperatorIntervention):
+ """SQLSTATE 57P04 - database dropped."""
+
+ pass
+
+
+class IdleSessionTimeout(OperatorIntervention):
+ """SQLSTATE 57P05 - idle session timeout."""
+
+ pass
+
+
+class SystemError(LibpqError):
+ """SQLSTATE 58000 - system error."""
+
+ pass
+
+
+class IoError(SystemError):
+ """SQLSTATE 58030 - io error."""
+
+ pass
+
+
+class UndefinedFile(SystemError):
+ """SQLSTATE 58P01 - undefined file."""
+
+ pass
+
+
+class DuplicateFile(SystemError):
+ """SQLSTATE 58P02 - duplicate file."""
+
+ pass
+
+
+class FileNameTooLong(SystemError):
+ """SQLSTATE 58P03 - file name too long."""
+
+ pass
+
+
+class ConfigFileError(LibpqError):
+ """SQLSTATE F0000 - config file error."""
+
+ pass
+
+
+class LockFileExists(ConfigFileError):
+ """SQLSTATE F0001 - lock file exists."""
+
+ pass
+
+
+class FDWError(LibpqError):
+ """SQLSTATE HV000 - fdw error."""
+
+ pass
+
+
+class FDWColumnNameNotFound(FDWError):
+ """SQLSTATE HV005 - fdw column name not found."""
+
+ pass
+
+
+class FDWDynamicParameterValueNeeded(FDWError):
+ """SQLSTATE HV002 - fdw dynamic parameter value needed."""
+
+ pass
+
+
+class FDWFunctionSequenceError(FDWError):
+ """SQLSTATE HV010 - fdw function sequence error."""
+
+ pass
+
+
+class FDWInconsistentDescriptorInformation(FDWError):
+ """SQLSTATE HV021 - fdw inconsistent descriptor information."""
+
+ pass
+
+
+class FDWInvalidAttributeValue(FDWError):
+ """SQLSTATE HV024 - fdw invalid attribute value."""
+
+ pass
+
+
+class FDWInvalidColumnName(FDWError):
+ """SQLSTATE HV007 - fdw invalid column name."""
+
+ pass
+
+
+class FDWInvalidColumnNumber(FDWError):
+ """SQLSTATE HV008 - fdw invalid column number."""
+
+ pass
+
+
+class FDWInvalidDataType(FDWError):
+ """SQLSTATE HV004 - fdw invalid data type."""
+
+ pass
+
+
+class FDWInvalidDataTypeDescriptors(FDWError):
+ """SQLSTATE HV006 - fdw invalid data type descriptors."""
+
+ pass
+
+
+class FDWInvalidDescriptorFieldIdentifier(FDWError):
+ """SQLSTATE HV091 - fdw invalid descriptor field identifier."""
+
+ pass
+
+
+class FDWInvalidHandle(FDWError):
+ """SQLSTATE HV00B - fdw invalid handle."""
+
+ pass
+
+
+class FDWInvalidOptionIndex(FDWError):
+ """SQLSTATE HV00C - fdw invalid option index."""
+
+ pass
+
+
+class FDWInvalidOptionName(FDWError):
+ """SQLSTATE HV00D - fdw invalid option name."""
+
+ pass
+
+
+class FDWInvalidStringLengthOrBufferLength(FDWError):
+ """SQLSTATE HV090 - fdw invalid string length or buffer length."""
+
+ pass
+
+
+class FDWInvalidStringFormat(FDWError):
+ """SQLSTATE HV00A - fdw invalid string format."""
+
+ pass
+
+
+class FDWInvalidUseOfNullPointer(FDWError):
+ """SQLSTATE HV009 - fdw invalid use of null pointer."""
+
+ pass
+
+
+class FDWTooManyHandles(FDWError):
+ """SQLSTATE HV014 - fdw too many handles."""
+
+ pass
+
+
+class FDWOutOfMemory(FDWError):
+ """SQLSTATE HV001 - fdw out of memory."""
+
+ pass
+
+
+class FDWNoSchemas(FDWError):
+ """SQLSTATE HV00P - fdw no schemas."""
+
+ pass
+
+
+class FDWOptionNameNotFound(FDWError):
+ """SQLSTATE HV00J - fdw option name not found."""
+
+ pass
+
+
+class FDWReplyHandle(FDWError):
+ """SQLSTATE HV00K - fdw reply handle."""
+
+ pass
+
+
+class FDWSchemaNotFound(FDWError):
+ """SQLSTATE HV00Q - fdw schema not found."""
+
+ pass
+
+
+class FDWTableNotFound(FDWError):
+ """SQLSTATE HV00R - fdw table not found."""
+
+ pass
+
+
+class FDWUnableToCreateExecution(FDWError):
+ """SQLSTATE HV00L - fdw unable to create execution."""
+
+ pass
+
+
+class FDWUnableToCreateReply(FDWError):
+ """SQLSTATE HV00M - fdw unable to create reply."""
+
+ pass
+
+
+class FDWUnableToEstablishConnection(FDWError):
+ """SQLSTATE HV00N - fdw unable to establish connection."""
+
+ pass
+
+
+class PlpgsqlError(LibpqError):
+ """SQLSTATE P0000 - plpgsql error."""
+
+ pass
+
+
+class RaiseException(PlpgsqlError):
+ """SQLSTATE P0001 - raise exception."""
+
+ pass
+
+
+class NoDataFound(PlpgsqlError):
+ """SQLSTATE P0002 - no data found."""
+
+ pass
+
+
+class TooManyRows(PlpgsqlError):
+ """SQLSTATE P0003 - too many rows."""
+
+ pass
+
+
+class AssertFailure(PlpgsqlError):
+ """SQLSTATE P0004 - assert failure."""
+
+ pass
+
+
+class InternalError(LibpqError):
+ """SQLSTATE XX000 - internal error."""
+
+ pass
+
+
+class DataCorrupted(InternalError):
+ """SQLSTATE XX001 - data corrupted."""
+
+ pass
+
+
+class IndexCorrupted(InternalError):
+ """SQLSTATE XX002 - index corrupted."""
+
+ pass
+
+
+SQLSTATE_TO_EXCEPTION: Dict[str, type] = {
+ "00000": SuccessfulCompletion,
+ "01000": Warning,
+ "0100C": DynamicResultSetsReturnedWarning,
+ "01008": ImplicitZeroBitPaddingWarning,
+ "01003": NullValueEliminatedInSetFunctionWarning,
+ "01007": PrivilegeNotGrantedWarning,
+ "01006": PrivilegeNotRevokedWarning,
+ "01004": StringDataRightTruncationWarning,
+ "01P01": DeprecatedFeatureWarning,
+ "02000": NoData,
+ "02001": NoAdditionalDynamicResultSetsReturned,
+ "03000": SQLStatementNotYetComplete,
+ "08000": ConnectionException,
+ "08003": ConnectionDoesNotExist,
+ "08006": ConnectionFailure,
+ "08001": SQLClientUnableToEstablishSQLConnection,
+ "08004": SQLServerRejectedEstablishmentOfSQLConnection,
+ "08007": TransactionResolutionUnknown,
+ "08P01": ProtocolViolation,
+ "09000": TriggeredActionException,
+ "0A000": FeatureNotSupported,
+ "0B000": InvalidTransactionInitiation,
+ "0F000": LocatorException,
+ "0F001": InvalidLocatorSpecification,
+ "0L000": InvalidGrantor,
+ "0LP01": InvalidGrantOperation,
+ "0P000": InvalidRoleSpecification,
+ "0Z000": DiagnosticsException,
+ "0Z002": StackedDiagnosticsAccessedWithoutActiveHandler,
+ "10608": InvalidArgumentForXquery,
+ "20000": CaseNotFound,
+ "21000": CardinalityViolation,
+ "22000": DataException,
+ "2202E": ArraySubscriptError,
+ "22021": CharacterNotInRepertoire,
+ "22008": DatetimeFieldOverflow,
+ "22012": DivisionByZero,
+ "22005": ErrorInAssignment,
+ "2200B": EscapeCharacterConflict,
+ "22022": IndicatorOverflow,
+ "22015": IntervalFieldOverflow,
+ "2201E": InvalidArgumentForLogarithm,
+ "22014": InvalidArgumentForNtileFunction,
+ "22016": InvalidArgumentForNthValueFunction,
+ "2201F": InvalidArgumentForPowerFunction,
+ "2201G": InvalidArgumentForWidthBucketFunction,
+ "22018": InvalidCharacterValueForCast,
+ "22007": InvalidDatetimeFormat,
+ "22019": InvalidEscapeCharacter,
+ "2200D": InvalidEscapeOctet,
+ "22025": InvalidEscapeSequence,
+ "22P06": NonstandardUseOfEscapeCharacter,
+ "22010": InvalidIndicatorParameterValue,
+ "22023": InvalidParameterValue,
+ "22013": InvalidPrecedingOrFollowingSize,
+ "2201B": InvalidRegularExpression,
+ "2201W": InvalidRowCountInLimitClause,
+ "2201X": InvalidRowCountInResultOffsetClause,
+ "2202H": InvalidTablesampleArgument,
+ "2202G": InvalidTablesampleRepeat,
+ "22009": InvalidTimeZoneDisplacementValue,
+ "2200C": InvalidUseOfEscapeCharacter,
+ "2200G": MostSpecificTypeMismatch,
+ "22004": NullValueNotAllowed,
+ "22002": NullValueNoIndicatorParameter,
+ "22003": NumericValueOutOfRange,
+ "2200H": SequenceGeneratorLimitExceeded,
+ "22026": StringDataLengthMismatch,
+ "22001": StringDataRightTruncation,
+ "22011": SubstringError,
+ "22027": TrimError,
+ "22024": UnterminatedCString,
+ "2200F": ZeroLengthCharacterString,
+ "22P01": FloatingPointException,
+ "22P02": InvalidTextRepresentation,
+ "22P03": InvalidBinaryRepresentation,
+ "22P04": BadCopyFileFormat,
+ "22P05": UntranslatableCharacter,
+ "2200L": NotAnXmlDocument,
+ "2200M": InvalidXmlDocument,
+ "2200N": InvalidXmlContent,
+ "2200S": InvalidXmlComment,
+ "2200T": InvalidXmlProcessingInstruction,
+ "22030": DuplicateJsonObjectKeyValue,
+ "22031": InvalidArgumentForSQLJsonDatetimeFunction,
+ "22032": InvalidJsonText,
+ "22033": InvalidSQLJsonSubscript,
+ "22034": MoreThanOneSQLJsonItem,
+ "22035": NoSQLJsonItem,
+ "22036": NonNumericSQLJsonItem,
+ "22037": NonUniqueKeysInAJsonObject,
+ "22038": SingletonSQLJsonItemRequired,
+ "22039": SQLJsonArrayNotFound,
+ "2203A": SQLJsonMemberNotFound,
+ "2203B": SQLJsonNumberNotFound,
+ "2203C": SQLJsonObjectNotFound,
+ "2203D": TooManyJsonArrayElements,
+ "2203E": TooManyJsonObjectMembers,
+ "2203F": SQLJsonScalarRequired,
+ "2203G": SQLJsonItemCannotBeCastToTargetType,
+ "23000": IntegrityConstraintViolation,
+ "23001": RestrictViolation,
+ "23502": NotNullViolation,
+ "23503": ForeignKeyViolation,
+ "23505": UniqueViolation,
+ "23514": CheckViolation,
+ "23P01": ExclusionViolation,
+ "24000": InvalidCursorState,
+ "25000": InvalidTransactionState,
+ "25001": ActiveSQLTransaction,
+ "25002": BranchTransactionAlreadyActive,
+ "25008": HeldCursorRequiresSameIsolationLevel,
+ "25003": InappropriateAccessModeForBranchTransaction,
+ "25004": InappropriateIsolationLevelForBranchTransaction,
+ "25005": NoActiveSQLTransactionForBranchTransaction,
+ "25006": ReadOnlySQLTransaction,
+ "25007": SchemaAndDataStatementMixingNotSupported,
+ "25P01": NoActiveSQLTransaction,
+ "25P02": InFailedSQLTransaction,
+ "25P03": IdleInTransactionSessionTimeout,
+ "25P04": TransactionTimeout,
+ "26000": InvalidSQLStatementName,
+ "27000": TriggeredDataChangeViolation,
+ "28000": InvalidAuthorizationSpecification,
+ "28P01": InvalidPassword,
+ "2B000": DependentPrivilegeDescriptorsStillExist,
+ "2BP01": DependentObjectsStillExist,
+ "2D000": InvalidTransactionTermination,
+ "2F000": SQLRoutineException,
+ "2F005": FunctionExecutedNoReturnStatement,
+ "2F002": SREModifyingSQLDataNotPermitted,
+ "2F003": SREProhibitedSQLStatementAttempted,
+ "2F004": SREReadingSQLDataNotPermitted,
+ "34000": InvalidCursorName,
+ "38000": ExternalRoutineException,
+ "38001": ContainingSQLNotPermitted,
+ "38002": EREModifyingSQLDataNotPermitted,
+ "38003": EREProhibitedSQLStatementAttempted,
+ "38004": EREReadingSQLDataNotPermitted,
+ "39000": ExternalRoutineInvocationException,
+ "39001": InvalidSqlstateReturned,
+ "39004": ERIENullValueNotAllowed,
+ "39P01": TriggerProtocolViolated,
+ "39P02": SrfProtocolViolated,
+ "39P03": EventTriggerProtocolViolated,
+ "3B000": SavepointException,
+ "3B001": InvalidSavepointSpecification,
+ "3D000": InvalidCatalogName,
+ "3F000": InvalidSchemaName,
+ "40000": TransactionRollback,
+ "40002": TransactionIntegrityConstraintViolation,
+ "40001": SerializationFailure,
+ "40003": StatementCompletionUnknown,
+ "40P01": DeadlockDetected,
+ "42000": SyntaxErrorOrAccessRuleViolation,
+ "42601": SyntaxError,
+ "42501": InsufficientPrivilege,
+ "42846": CannotCoerce,
+ "42803": GroupingError,
+ "42P20": WindowingError,
+ "42P19": InvalidRecursion,
+ "42830": InvalidForeignKey,
+ "42602": InvalidName,
+ "42622": NameTooLong,
+ "42939": ReservedName,
+ "42804": DatatypeMismatch,
+ "42P18": IndeterminateDatatype,
+ "42P21": CollationMismatch,
+ "42P22": IndeterminateCollation,
+ "42809": WrongObjectType,
+ "428C9": GeneratedAlways,
+ "42703": UndefinedColumn,
+ "42883": UndefinedFunction,
+ "42P01": UndefinedTable,
+ "42P02": UndefinedParameter,
+ "42704": UndefinedObject,
+ "42701": DuplicateColumn,
+ "42P03": DuplicateCursor,
+ "42P04": DuplicateDatabase,
+ "42723": DuplicateFunction,
+ "42P05": DuplicatePreparedStatement,
+ "42P06": DuplicateSchema,
+ "42P07": DuplicateTable,
+ "42712": DuplicateAlias,
+ "42710": DuplicateObject,
+ "42702": AmbiguousColumn,
+ "42725": AmbiguousFunction,
+ "42P08": AmbiguousParameter,
+ "42P09": AmbiguousAlias,
+ "42P10": InvalidColumnReference,
+ "42611": InvalidColumnDefinition,
+ "42P11": InvalidCursorDefinition,
+ "42P12": InvalidDatabaseDefinition,
+ "42P13": InvalidFunctionDefinition,
+ "42P14": InvalidPreparedStatementDefinition,
+ "42P15": InvalidSchemaDefinition,
+ "42P16": InvalidTableDefinition,
+ "42P17": InvalidObjectDefinition,
+ "44000": WithCheckOptionViolation,
+ "53000": InsufficientResources,
+ "53100": DiskFull,
+ "53200": OutOfMemory,
+ "53300": TooManyConnections,
+ "53400": ConfigurationLimitExceeded,
+ "54000": ProgramLimitExceeded,
+ "54001": StatementTooComplex,
+ "54011": TooManyColumns,
+ "54023": TooManyArguments,
+ "55000": ObjectNotInPrerequisiteState,
+ "55006": ObjectInUse,
+ "55P02": CantChangeRuntimeParam,
+ "55P03": LockNotAvailable,
+ "55P04": UnsafeNewEnumValueUsage,
+ "57000": OperatorIntervention,
+ "57014": QueryCanceled,
+ "57P01": AdminShutdown,
+ "57P02": CrashShutdown,
+ "57P03": CannotConnectNow,
+ "57P04": DatabaseDropped,
+ "57P05": IdleSessionTimeout,
+ "58000": SystemError,
+ "58030": IoError,
+ "58P01": UndefinedFile,
+ "58P02": DuplicateFile,
+ "58P03": FileNameTooLong,
+ "F0000": ConfigFileError,
+ "F0001": LockFileExists,
+ "HV000": FDWError,
+ "HV005": FDWColumnNameNotFound,
+ "HV002": FDWDynamicParameterValueNeeded,
+ "HV010": FDWFunctionSequenceError,
+ "HV021": FDWInconsistentDescriptorInformation,
+ "HV024": FDWInvalidAttributeValue,
+ "HV007": FDWInvalidColumnName,
+ "HV008": FDWInvalidColumnNumber,
+ "HV004": FDWInvalidDataType,
+ "HV006": FDWInvalidDataTypeDescriptors,
+ "HV091": FDWInvalidDescriptorFieldIdentifier,
+ "HV00B": FDWInvalidHandle,
+ "HV00C": FDWInvalidOptionIndex,
+ "HV00D": FDWInvalidOptionName,
+ "HV090": FDWInvalidStringLengthOrBufferLength,
+ "HV00A": FDWInvalidStringFormat,
+ "HV009": FDWInvalidUseOfNullPointer,
+ "HV014": FDWTooManyHandles,
+ "HV001": FDWOutOfMemory,
+ "HV00P": FDWNoSchemas,
+ "HV00J": FDWOptionNameNotFound,
+ "HV00K": FDWReplyHandle,
+ "HV00Q": FDWSchemaNotFound,
+ "HV00R": FDWTableNotFound,
+ "HV00L": FDWUnableToCreateExecution,
+ "HV00M": FDWUnableToCreateReply,
+ "HV00N": FDWUnableToEstablishConnection,
+ "P0000": PlpgsqlError,
+ "P0001": RaiseException,
+ "P0002": NoDataFound,
+ "P0003": TooManyRows,
+ "P0004": AssertFailure,
+ "XX000": InternalError,
+ "XX001": DataCorrupted,
+ "XX002": IndexCorrupted,
+}
+
+
+__all__ = [
+ "InvalidCursorName",
+ "UndefinedParameter",
+ "UndefinedColumn",
+ "NotAnXmlDocument",
+ "FDWOutOfMemory",
+ "InvalidRoleSpecification",
+ "InvalidArgumentForNthValueFunction",
+ "SQLJsonObjectNotFound",
+ "FDWSchemaNotFound",
+ "InvalidParameterValue",
+ "InvalidTableDefinition",
+ "AssertFailure",
+ "FDWInvalidOptionName",
+ "InvalidEscapeOctet",
+ "ReadOnlySQLTransaction",
+ "ExternalRoutineInvocationException",
+ "CrashShutdown",
+ "FDWInvalidOptionIndex",
+ "NotNullViolation",
+ "ConfigFileError",
+ "InvalidSQLJsonSubscript",
+ "InvalidForeignKey",
+ "InsufficientResources",
+ "ObjectNotInPrerequisiteState",
+ "InvalidRowCountInLimitClause",
+ "IntervalFieldOverflow",
+ "CollationMismatch",
+ "InvalidArgumentForNtileFunction",
+ "InvalidCharacterValueForCast",
+ "NonUniqueKeysInAJsonObject",
+ "DependentPrivilegeDescriptorsStillExist",
+ "InFailedSQLTransaction",
+ "GroupingError",
+ "TransactionTimeout",
+ "CaseNotFound",
+ "ConnectionException",
+ "DuplicateJsonObjectKeyValue",
+ "InvalidSchemaDefinition",
+ "FDWUnableToCreateReply",
+ "UndefinedTable",
+ "SequenceGeneratorLimitExceeded",
+ "InvalidJsonText",
+ "IdleSessionTimeout",
+ "NullValueNotAllowed",
+ "BranchTransactionAlreadyActive",
+ "InvalidGrantOperation",
+ "NullValueNoIndicatorParameter",
+ "ProtocolViolation",
+ "FDWInvalidDataTypeDescriptors",
+ "TriggeredDataChangeViolation",
+ "ExternalRoutineException",
+ "InvalidSqlstateReturned",
+ "PlpgsqlError",
+ "InvalidXmlContent",
+ "TriggeredActionException",
+ "SQLClientUnableToEstablishSQLConnection",
+ "FDWTableNotFound",
+ "NumericValueOutOfRange",
+ "RestrictViolation",
+ "AmbiguousParameter",
+ "StatementTooComplex",
+ "UnsafeNewEnumValueUsage",
+ "NonNumericSQLJsonItem",
+ "InvalidIndicatorParameterValue",
+ "ExclusionViolation",
+ "OperatorIntervention",
+ "QueryCanceled",
+ "Warning",
+ "InvalidArgumentForSQLJsonDatetimeFunction",
+ "ForeignKeyViolation",
+ "StringDataLengthMismatch",
+ "SQLRoutineException",
+ "TooManyConnections",
+ "TooManyJsonObjectMembers",
+ "NoData",
+ "UntranslatableCharacter",
+ "FDWUnableToEstablishConnection",
+ "LockFileExists",
+ "SREReadingSQLDataNotPermitted",
+ "IndeterminateDatatype",
+ "CheckViolation",
+ "InvalidDatabaseDefinition",
+ "NoActiveSQLTransactionForBranchTransaction",
+ "SQLServerRejectedEstablishmentOfSQLConnection",
+ "DuplicateFile",
+ "FDWInvalidColumnNumber",
+ "TransactionRollback",
+ "MoreThanOneSQLJsonItem",
+ "WithCheckOptionViolation",
+ "FDWNoSchemas",
+ "GeneratedAlways",
+ "CannotConnectNow",
+ "CardinalityViolation",
+ "InvalidAuthorizationSpecification",
+ "SQLJsonNumberNotFound",
+ "SQLJsonMemberNotFound",
+ "InvalidUseOfEscapeCharacter",
+ "UnterminatedCString",
+ "TrimError",
+ "SrfProtocolViolated",
+ "DiskFull",
+ "TooManyColumns",
+ "InvalidObjectDefinition",
+ "InvalidArgumentForLogarithm",
+ "TooManyJsonArrayElements",
+ "OutOfMemory",
+ "EREProhibitedSQLStatementAttempted",
+ "FDWInvalidStringFormat",
+ "StackedDiagnosticsAccessedWithoutActiveHandler",
+ "SchemaAndDataStatementMixingNotSupported",
+ "InternalError",
+ "InvalidEscapeCharacter",
+ "FDWError",
+ "ImplicitZeroBitPaddingWarning",
+ "DivisionByZero",
+ "InvalidTablesampleArgument",
+ "DeadlockDetected",
+ "CantChangeRuntimeParam",
+ "UndefinedObject",
+ "UniqueViolation",
+ "InvalidCursorDefinition",
+ "ConnectionFailure",
+ "UndefinedFunction",
+ "FDWFunctionSequenceError",
+ "ErrorInAssignment",
+ "SuccessfulCompletion",
+ "StringDataRightTruncation",
+ "FDWTooManyHandles",
+ "FDWInvalidDataType",
+ "ActiveSQLTransaction",
+ "InvalidTextRepresentation",
+ "InvalidSQLStatementName",
+ "PrivilegeNotGrantedWarning",
+ "SREModifyingSQLDataNotPermitted",
+ "IndeterminateCollation",
+ "SystemError",
+ "NullValueEliminatedInSetFunctionWarning",
+ "DependentObjectsStillExist",
+ "InvalidSchemaName",
+ "DuplicateColumn",
+ "FunctionExecutedNoReturnStatement",
+ "InvalidColumnDefinition",
+ "DynamicResultSetsReturnedWarning",
+ "IdleInTransactionSessionTimeout",
+ "StatementCompletionUnknown",
+ "CannotCoerce",
+ "InvalidTransactionState",
+ "DuplicateTable",
+ "BadCopyFileFormat",
+ "ZeroLengthCharacterString",
+ "SyntaxErrorOrAccessRuleViolation",
+ "SingletonSQLJsonItemRequired",
+ "IndexCorrupted",
+ "FDWInvalidColumnName",
+ "DataCorrupted",
+ "ERIENullValueNotAllowed",
+ "ArraySubscriptError",
+ "FDWReplyHandle",
+ "DiagnosticsException",
+ "InvalidTablesampleRepeat",
+ "SQLJsonItemCannotBeCastToTargetType",
+ "FDWInvalidHandle",
+ "InvalidPassword",
+ "InvalidEscapeSequence",
+ "EscapeCharacterConflict",
+ "InvalidSavepointSpecification",
+ "FDWInvalidAttributeValue",
+ "ContainingSQLNotPermitted",
+ "LocatorException",
+ "DatatypeMismatch",
+ "InvalidCursorState",
+ "InvalidName",
+ "IndicatorOverflow",
+ "ReservedName",
+ "DatetimeFieldOverflow",
+ "FDWInconsistentDescriptorInformation",
+ "FloatingPointException",
+ "AmbiguousAlias",
+ "InvalidRecursion",
+ "WrongObjectType",
+ "UndefinedFile",
+ "LockNotAvailable",
+ "InvalidRowCountInResultOffsetClause",
+ "ObjectInUse",
+ "DeprecatedFeatureWarning",
+ "FDWDynamicParameterValueNeeded",
+ "DuplicateFunction",
+ "InvalidXmlDocument",
+ "StringDataRightTruncationWarning",
+ "DuplicatePreparedStatement",
+ "InvalidGrantor",
+ "EventTriggerProtocolViolated",
+ "FDWInvalidUseOfNullPointer",
+ "FDWUnableToCreateExecution",
+ "ConnectionDoesNotExist",
+ "InvalidCatalogName",
+ "InvalidArgumentForXquery",
+ "FDWColumnNameNotFound",
+ "TransactionIntegrityConstraintViolation",
+ "InvalidPreparedStatementDefinition",
+ "FDWInvalidDescriptorFieldIdentifier",
+ "FDWOptionNameNotFound",
+ "InvalidArgumentForPowerFunction",
+ "FDWInvalidStringLengthOrBufferLength",
+ "SREProhibitedSQLStatementAttempted",
+ "NoDataFound",
+ "DuplicateDatabase",
+ "FeatureNotSupported",
+ "IntegrityConstraintViolation",
+ "AmbiguousColumn",
+ "PrivilegeNotRevokedWarning",
+ "FileNameTooLong",
+ "InvalidArgumentForWidthBucketFunction",
+ "HeldCursorRequiresSameIsolationLevel",
+ "NoSQLJsonItem",
+ "IoError",
+ "SavepointException",
+ "NoActiveSQLTransaction",
+ "InvalidFunctionDefinition",
+ "AdminShutdown",
+ "DatabaseDropped",
+ "InvalidRegularExpression",
+ "WindowingError",
+ "InvalidColumnReference",
+ "InvalidBinaryRepresentation",
+ "SQLJsonScalarRequired",
+ "ConfigurationLimitExceeded",
+ "SyntaxError",
+ "SerializationFailure",
+ "ProgramLimitExceeded",
+ "DuplicateSchema",
+ "SQLStatementNotYetComplete",
+ "LibpqError",
+ "DataException",
+ "SubstringError",
+ "InvalidLocatorSpecification",
+ "InappropriateAccessModeForBranchTransaction",
+ "EREModifyingSQLDataNotPermitted",
+ "InsufficientPrivilege",
+ "NoAdditionalDynamicResultSetsReturned",
+ "SQLJsonArrayNotFound",
+ "NameTooLong",
+ "InvalidTimeZoneDisplacementValue",
+ "InappropriateIsolationLevelForBranchTransaction",
+ "RaiseException",
+ "EREReadingSQLDataNotPermitted",
+ "TriggerProtocolViolated",
+ "NonstandardUseOfEscapeCharacter",
+ "InvalidTransactionInitiation",
+ "DuplicateAlias",
+ "TransactionResolutionUnknown",
+ "TooManyRows",
+ "InvalidXmlComment",
+ "MostSpecificTypeMismatch",
+ "DuplicateObject",
+ "DuplicateCursor",
+ "AmbiguousFunction",
+ "TooManyArguments",
+ "InvalidXmlProcessingInstruction",
+ "InvalidTransactionTermination",
+ "InvalidDatetimeFormat",
+ "InvalidPrecedingOrFollowingSize",
+ "CharacterNotInRepertoire",
+ "SQLSTATE_TO_EXCEPTION",
+]
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..764a96c2478
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+PostgreSQL error types mapped from SQLSTATE codes.
+
+This module provides LibpqError and its subclasses for handling PostgreSQL
+errors based on SQLSTATE codes. The exception classes in _generated_errors.py
+are auto-generated from src/backend/utils/errcodes.txt.
+
+To regenerate: src/tools/generate_pytest_libpq_errors.py
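+
+Typical usage in a test (see pyt/test_errors.py):
+
+ with pytest.raises(libpq.errors.UniqueViolation):
+ conn.sql("INSERT INTO t VALUES (1)")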
+"""
+
+from typing import Optional
+
+from ._error_base import LibpqError, LibpqWarning
+from ._generated_errors import (
+ SQLSTATE_TO_EXCEPTION,
+)
+from ._generated_errors import * # noqa: F403
+
+
+def get_exception_class(sqlstate: Optional[str]) -> type:
+ """Get the appropriate exception class for a SQLSTATE code."""
+ if sqlstate in SQLSTATE_TO_EXCEPTION:
+ return SQLSTATE_TO_EXCEPTION[sqlstate]
+ return LibpqError
+
+
+def make_error(message: str, *, sqlstate: Optional[str] = None, **kwargs) -> LibpqError:
+ """Create an appropriate LibpqError subclass based on the SQLSTATE code."""
+ exc_class = get_exception_class(sqlstate)
+ return exc_class(message, sqlstate=sqlstate, **kwargs)
+
+
+__all__ = [
+ "LibpqError",
+ "LibpqWarning",
+ "make_error",
+]
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..20ca4416ebc 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,7 +10,9 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
- 'pyt/test_something.py',
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..5dae49b6406
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,4 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import has_test_extra, require_test_extra
+from ._win32 import current_windows_user
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..154c986d73e
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,54 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
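+
+ For example, has_test_extra("ldap") returns True when PG_TEST_EXTRA is
+ set to "ssl ldap kerberos".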
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extra(*keys: str):
+ """
+ A convenience marker which skips a test unless all of the required keys
+ are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pg.require_test_extra("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pg.require_test_extra("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([has_test_extra(k) for k in keys]),
+ reason="requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys)),
+ )
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pypg/_win32.py b/src/test/pytest/pypg/_win32.py
new file mode 100644
index 00000000000..3fd67b10191
--- /dev/null
+++ b/src/test/pytest/pypg/_win32.py
@@ -0,0 +1,145 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import ctypes
+import platform
+
+
+def current_windows_user():
+ """
+ A port of pg_regress.c's current_windows_user() helper. Returns
+ (accountname, domainname).
+
+ XXX This is dead code now, but I'm keeping it as a motivating example of
+ Win32 interaction; someone may find it useful in the future when writing
+ SSPI tests.
+ """
+ try:
+ advapi32 = ctypes.windll.advapi32
+ kernel32 = ctypes.windll.kernel32
+ except AttributeError:
+ raise RuntimeError(
+ f"current_windows_user() is not supported on {platform.system()}"
+ )
+
+ def raise_winerror_when_false(result, func, arguments):
+ """
+ A ctypes errcheck handler that raises WinError (which will contain the
+ result of GetLastError()) when the function's return value is false.
+ """
+ if not result:
+ raise ctypes.WinError()
+
+ #
+ # Function Prototypes
+ #
+
+ from ctypes import wintypes
+
+ # GetCurrentProcess
+ kernel32.GetCurrentProcess.restype = wintypes.HANDLE
+ kernel32.GetCurrentProcess.argtypes = []
+
+ # OpenProcessToken
+ TOKEN_READ = 0x00020008
+
+ advapi32.OpenProcessToken.restype = wintypes.BOOL
+ advapi32.OpenProcessToken.argtypes = [
+ wintypes.HANDLE,
+ wintypes.DWORD,
+ wintypes.PHANDLE,
+ ]
+ advapi32.OpenProcessToken.errcheck = raise_winerror_when_false
+
+ # GetTokenInformation
+ PSID = wintypes.LPVOID # we don't need the internals
+ TOKEN_INFORMATION_CLASS = wintypes.INT
+ TokenUser = 1
+
+ class SID_AND_ATTRIBUTES(ctypes.Structure):
+ _fields_ = [
+ ("Sid", PSID),
+ ("Attributes", wintypes.DWORD),
+ ]
+
+ class TOKEN_USER(ctypes.Structure):
+ _fields_ = [
+ ("User", SID_AND_ATTRIBUTES),
+ ]
+
+ advapi32.GetTokenInformation.restype = wintypes.BOOL
+ advapi32.GetTokenInformation.argtypes = [
+ wintypes.HANDLE,
+ TOKEN_INFORMATION_CLASS,
+ wintypes.LPVOID,
+ wintypes.DWORD,
+ wintypes.PDWORD,
+ ]
+ advapi32.GetTokenInformation.errcheck = raise_winerror_when_false
+
+ # LookupAccountSid
+ SID_NAME_USE = wintypes.INT
+ PSID_NAME_USE = ctypes.POINTER(SID_NAME_USE)
+
+ advapi32.LookupAccountSidW.restype = wintypes.BOOL
+ advapi32.LookupAccountSidW.argtypes = [
+ wintypes.LPCWSTR,
+ PSID,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ wintypes.LPWSTR,
+ wintypes.LPDWORD,
+ PSID_NAME_USE,
+ ]
+ advapi32.LookupAccountSidW.errcheck = raise_winerror_when_false
+
+ #
+ # Implementation (see pg_SSPI_recv_auth())
+ #
+
+ # Get the current process token...
+ token = wintypes.HANDLE()
+ proc = kernel32.GetCurrentProcess()
+ advapi32.OpenProcessToken(proc, TOKEN_READ, token)
+
+ # ...then read the TOKEN_USER struct for that token...
+ info = TOKEN_USER()
+ infolen = wintypes.DWORD()
+
+ try:
+ # (GetTokenInformation creates a buffer bigger than TOKEN_USER, so we
+ # have to query the correct length first.)
+ advapi32.GetTokenInformation(token, TokenUser, None, 0, ctypes.byref(infolen))
+ assert False, "GetTokenInformation succeeded unexpectedly"
+
+ except OSError as err:
+ assert err.winerror == 122 # insufficient buffer
+
+ ctypes.resize(info, infolen.value)
+ advapi32.GetTokenInformation(
+ token,
+ TokenUser,
+ ctypes.byref(info),
+ ctypes.sizeof(info),
+ ctypes.byref(infolen),
+ )
+
+ # ...then pull the account and domain names out of the user SID.
+ MAXPGPATH = 1024
+
+ account = ctypes.create_unicode_buffer(MAXPGPATH)
+ domain = ctypes.create_unicode_buffer(MAXPGPATH)
+ accountlen = wintypes.DWORD(ctypes.sizeof(account))
+ domainlen = wintypes.DWORD(ctypes.sizeof(domain))
+ use = SID_NAME_USE()
+
+ advapi32.LookupAccountSidW(
+ None,
+ info.User.Sid,
+ account,
+ ctypes.byref(accountlen),
+ domain,
+ ctypes.byref(domainlen),
+ ctypes.byref(use),
+ )
+
+ return (account.value, domain.value)
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..9caa5b22b25
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,191 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import secrets
+import time
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python interpreter.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return capture(pg_config, "--bindir")
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return capture(pg_config, "--libdir")
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def winpassword():
+ """The per-session SCRAM password for the server admin on Windows."""
+ return secrets.token_urlsafe(16)
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Starts a Postgres server listening on localhost. The HBA initially allows
+ only local UNIX-socket connections (peer-authenticated, or SCRAM with a
+ generated password on Windows).
+
+ Returns a PostgresServer instance with methods for server management,
+ configuration, and creating test databases/users.
+ """
+ server = PostgresServer(bindir, datadir, sockdir, winpassword, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context. This can be useful when certain settings
+ need to be overridden at the module level through autouse fixtures. An
+ example of this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+ """
+ pg_server_module.set_timeout(remaining_timeout)
+ with pg_server_module.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..dd2aa4fc434
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,391 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import shutil
+import socket
+import subprocess
+import tempfile
+import time
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
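+
+ For example:
+
+ with FileBackup(datadir / "pg_ident.conf"):
+ ...  # modify the file freely; the original contents are restored on exit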
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
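+
+ For example:
+
+ hba.prepend(
+ "local   all  all  trust",
+ ["host", "all", "all", "127.0.0.1/32", "trust"],
+ )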
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
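+
+ For example:
+
+ conf.set(log_min_messages="debug1", ssl="on")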
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(self, bindir, datadir, sockdir, winpassword, libpq_handle):
+ """
+ Initialize and start a PostgreSQL server instance.
+ """
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._winpassword = winpassword
+ self._pg_ctl = os.path.join(bindir, "pg_ctl")
+ self._log = os.path.join(datadir, "postgresql.log")
+
+ initdb = os.path.join(bindir, "initdb")
+ pg_ctl = self._pg_ctl
+
+ # Lock down the HBA by default; tests can open it back up later.
+ if platform.system() == "Windows":
+ # On Windows, for admin connections, use SCRAM with a generated password
+ # over local sockets. This requires additional work during initdb.
+ method = "scram-sha-256"
+
+ # NamedTemporaryFile doesn't work very nicely on Windows until Python
+ # 3.12, which introduces NamedTemporaryFile(delete_on_close=False).
+ # Until then, specify delete=False and manually unlink after use.
+ with tempfile.NamedTemporaryFile("w", delete=False) as pwfile:
+ pwfile.write(winpassword)
+
+ run(initdb, "--auth=" + method, "--pwfile", pwfile.name, datadir)
+ os.unlink(pwfile.name)
+
+ else:
+ # For other OSes we can just use peer auth.
+ method = "peer"
+ run(pg_ctl, "-D", datadir, "init")
+
+ with open(datadir / "pg_hba.conf", "w") as f:
+ print(f"# default: local {method} connections only", file=f)
+ print(f"local all all {method}", file=f)
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ s = socket.create_server(addr, family=socket.AF_INET6, dualstack_ipv6=True)
+
+ hostaddr, port, _, _ = s.getsockname()
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ s = socket.socket()
+ s.bind(addr)
+
+ hostaddr, port = s.getsockname()
+ addrs = [hostaddr]
+
+ log = self._log
+
+ with s, open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ print("unix_socket_directories = '{}'".format(sockdir.as_posix()), file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+
+ # Between closing of the socket, s, and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put any new
+ # work here.
+
+ run(pg_ctl, "-D", datadir, "-l", log, "start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ pid = int(f.readline().strip())
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ self.pid = pid
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ if platform.system() == "Windows":
+ pw = dict(PGPASSWORD=self._winpassword)
+ else:
+ pw = None
+ self._run(os.path.join(self._bindir, "psql"), "-w", *args, addenv=pw)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "-l", self._log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.sockdir),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
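+
+ For example:
+
+ with server.subcontext():
+ server.create_users("scram")  # dropped automatically on exit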
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ run(self._pg_ctl, "-D", self.datadir, "-l", self._log, "stop")
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ This is a convenience method that automatically fills in the host, port,
+ and dbname (defaulting to 'postgres') for connecting to this server.
+
+ Connections are registered with this server's internal cleanup stack
+ and use the timeout function installed via set_timeout().
+
+ Args:
+ **opts: Additional connection options (these override the defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ # Set default connection options for this server
+ defaults = {
+ "host": str(self.sockdir),
+ "port": self.port,
+ "dbname": "postgres",
+ }
+
+ # On Windows, include the password for SCRAM authentication
+ if platform.system() == "Windows" and self._winpassword:
+ defaults["password"] = self._winpassword
+
+ # Merge with user-provided options (user options take precedence)
+ defaults.update(opts)
+
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6 which we still like to support
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
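+ """capture runs the given command and returns its stdout, minus the trailing newline"""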
+ out = run(command, *args, stdout=stdout, encoding=encoding, **kwargs).stdout
+
+ # Trim a single trailing newline by hand; str.removesuffix() is not
+ # available until Python 3.9.
+ if out.endswith("\n"):
+ out = out[:-1]
+
+ return out
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..e750ac080b5
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1,4 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+
+pytest_plugins = ["pypg.fixtures"]
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..ad109039668
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+import libpq
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises SyntaxError with correct SQLSTATE."""
+ with pytest.raises(libpq.errors.SyntaxError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields and can be caught as parent class."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(libpq.errors.UniqueViolation) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..5a5a1ae1edf
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,286 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql(
+ "SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL"
+ )
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ import uuid
+
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ import uuid
+
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Sanity-check the canonical string form: 36 characters, 8-4-4-4-12.
+ assert len(str(result)) == 36
diff --git a/src/tools/generate_pytest_libpq_errors.py b/src/tools/generate_pytest_libpq_errors.py
new file mode 100755
index 00000000000..ba92891c17a
--- /dev/null
+++ b/src/tools/generate_pytest_libpq_errors.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Generate src/test/pytest/libpq/_generated_errors.py from errcodes.txt.
+"""
+
+import sys
+from pathlib import Path
+
+
+ACRONYMS = {"sql", "fdw"}
+WORD_MAP = {
+ "sqlclient": "SQLClient",
+ "sqlserver": "SQLServer",
+ "sqlconnection": "SQLConnection",
+}
+
+
+def snake_to_pascal(name: str) -> str:
+ """Convert snake_case to PascalCase, keeping acronyms uppercase."""
+ words = []
+ for word in name.split("_"):
+ if word in WORD_MAP:
+ words.append(WORD_MAP[word])
+ elif word in ACRONYMS:
+ words.append(word.upper())
+ else:
+ words.append(word.capitalize())
+ return "".join(words)
+
+
+def parse_errcodes(path: Path):
+ """Parse errcodes.txt and return list of (sqlstate, macro_name, spec_name) tuples."""
+ errors = []
+
+ with open(path) as f:
+ for line in f:
+ parts = line.split()
+ if len(parts) >= 4 and len(parts[0]) == 5:
+ sqlstate, _, macro_name, spec_name = parts[:4]
+ errors.append((sqlstate, macro_name, spec_name))
+
+ return errors
+
+
+def macro_to_class_name(macro_name: str) -> str:
+ """Convert ERRCODE_FOO_BAR to FooBar."""
+ name = macro_name.removeprefix("ERRCODE_")
+ # Move WARNING prefix to the end as a suffix
+ if name.startswith("WARNING_"):
+ name = name.removeprefix("WARNING_") + "_WARNING"
+ return snake_to_pascal(name.lower())
+
+
+def generate_errors(errcodes_path: Path):
+ """Generate the _generated_errors.py content."""
+ errors = parse_errcodes(errcodes_path)
+
+ # Find spec_names that appear more than once (collisions)
+ spec_name_counts: dict[str, int] = {}
+ for _, _, spec_name in errors:
+ spec_name_counts[spec_name] = spec_name_counts.get(spec_name, 0) + 1
+ colliding_spec_names = {
+ name for name, count in spec_name_counts.items() if count > 1
+ }
+
+ lines = [
+ "# Copyright (c) 2025, PostgreSQL Global Development Group",
+ "# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.",
+ "",
+ '"""',
+ "Generated PostgreSQL error classes mapped from SQLSTATE codes.",
+ '"""',
+ "",
+ "from typing import Dict",
+ "",
+ "from ._error_base import LibpqError, LibpqWarning",
+ "",
+ "",
+ ]
+
+ generated_classes = {"LibpqError"}
+ sqlstate_to_exception = {}
+
+ for sqlstate, macro_name, spec_name in errors:
+ # 000 errors define the parent class for all errors in this SQLSTATE class
+ if sqlstate.endswith("000"):
+ exc_name = snake_to_pascal(spec_name)
+ if exc_name == "Warning":
+ parent = "LibpqWarning"
+ else:
+ parent = "LibpqError"
+ else:
+ if spec_name in colliding_spec_names:
+ exc_name = macro_to_class_name(macro_name)
+ else:
+ exc_name = snake_to_pascal(spec_name)
+ # Use parent class if available, otherwise LibpqError
+ parent = sqlstate_to_exception.get(sqlstate[:2] + "000", "LibpqError")
+ # Warnings should end with "Warning"
+ if parent == "Warning" and not exc_name.endswith("Warning"):
+ exc_name += "Warning"
+
+ generated_classes.add(exc_name)
+ sqlstate_to_exception[sqlstate] = exc_name
+ lines.extend(
+ [
+ f"class {exc_name}({parent}):",
+ f' """SQLSTATE {sqlstate} - {spec_name.replace("_", " ")}."""',
+ "",
+ " pass",
+ "",
+ "",
+ ]
+ )
+
+ lines.append("SQLSTATE_TO_EXCEPTION: Dict[str, type] = {")
+ for sqlstate, exc_name in sqlstate_to_exception.items():
+ lines.append(f' "{sqlstate}": {exc_name},')
+ lines.extend(["}", "", ""])
+
+ all_exports = list(generated_classes) + ["SQLSTATE_TO_EXCEPTION"]
+ lines.append("__all__ = [")
+ for name in all_exports:
+ lines.append(f' "{name}",')
+ lines.append("]")
+
+ return "\n".join(lines) + "\n"
+
+
+if __name__ == "__main__":
+ script_dir = Path(__file__).resolve().parent
+ src_root = script_dir.parent.parent
+
+ errcodes_path = src_root / "src" / "backend" / "utils" / "errcodes.txt"
+ output_path = (
+ src_root / "src" / "test" / "pytest" / "libpq" / "_generated_errors.py"
+ )
+
+ if not errcodes_path.exists():
+ print(f"Error: {errcodes_path} not found", file=sys.stderr)
+ sys.exit(1)
+
+ output = generate_errors(errcodes_path)
+ output_path.write_text(output)
+ print(f"Generated {output_path}")
--
2.52.0
Attachment: v4-0005-WIP-pytest-Add-some-SSL-client-tests.patch (text/x-patch)
From cd2bae3906da7f08ef6afaa9e5d892542353a90c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v4 5/7] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The `pg` test package contains some helpers and fixtures (as well as
some self-tests for more complicated behavior). Of note:
- pg.require_test_extra() lets you mark a test/class/module as skippable
if PG_TEST_EXTRA does not contain the necessary strings.
- pg.remaining_timeout() is a function which can be repeatedly called to
determine how much of the PG_TEST_TIMEOUT_DEFAULT remains for the
current test item.
- pg.libpq is a fixture that wraps libpq.so in a more friendly, but
still low-level, ctypes FFI. Allocated resources are unwound and
released during test teardown.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pg.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 ++-
config/pytest-requirements.txt | 10 ++
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 130 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 438 insertions(+), 6 deletions(-)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3b0bb202276..b976baa5b34 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -229,6 +229,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -323,6 +324,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -347,8 +349,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -509,8 +512,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -659,6 +663,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -679,6 +684,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -817,7 +823,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -880,7 +886,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-packaging mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/config/pytest-requirements.txt b/config/pytest-requirements.txt
index 1c5e283d1e2..f0cc0a1c518 100644
--- a/config/pytest-requirements.txt
+++ b/config/pytest-requirements.txt
@@ -19,3 +19,13 @@ pytest >= 7.0, < 10
# packaging is used by check_pytest.py at configure time.
packaging
+
+# Notes on the cryptography package:
+# - 3.3.2 is shipped on Debian bullseye.
+# - 3.4.x drops support for Python 2, making it a version of note for older LTS
+# distros.
+# - 35.x switched versioning schemes and moved to Rust parsing.
+# - 40.x is the last version supporting Python 3.6.
+# XXX Is it appropriate to require cryptography, or should we simply skip
+# dependent tests?
+cryptography >= 3.3.2
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..db8fa8655a8
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,130 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+pytest_plugins = ["pypg.fixtures"]
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..247681f93cb
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extra("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
Attachment: v4-0006-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)
From 14fbc06cace6dd27862dd0607608e9af27aa5f80 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v4 6/7] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
New session-level fixtures have been added which probably need to
migrate to the `pg` package. Of note:
- datadir points to the server's data directory
- sockdir points to the server's UNIX socket/lock directory
- server_instance actually inits and starts a server via the pg_ctl on
PATH (and could eventually point at an installcheck target)
Wrapping these session-level fixtures is pg_server[_session], which
provides APIs for configuration changes that unwind themselves at the
end of fixture scopes. There's also an example of nested scopes, via
pg_server_session.subcontext(). Many TODOs remain before we're on par
with Test::Cluster, but this should illustrate my desired architecture
pretty well.
Windows currently uses SCRAM-over-UNIX for the admin account rather than
SSPI-over-TCP. There's some dead Win32 code in pg.current_windows_user,
but I've kept it as an illustration of how a developer might write such
code for SSPI. I'll probably remove it in a future patch version.
TODOs:
- port more server configuration behavior from PostgreSQL::Test::Cluster
- decide again on "session" vs. "module" scope for server fixtures
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index db8fa8655a8..dacb9599532 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -128,3 +128,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..4a327b40714
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extra("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
Attachment: v4-0007-XXX-run-pytest-and-ssl-suite-all-OSes.patch (text/x-patch)
From fa5bb9dc099cad06f14eb8ce069104f6f22838c2 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:38:52 -0700
Subject: [PATCH v4 7/7] XXX run pytest and ssl suite, all OSes
---
.cirrus.star | 2 +-
.cirrus.tasks.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/.cirrus.star b/.cirrus.star
index e9bb672b959..7c1caaa12f1 100644
--- a/.cirrus.star
+++ b/.cirrus.star
@@ -73,7 +73,7 @@ def compute_environment_vars():
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
- default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
+ default_manual_trigger_tasks = []
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index b976baa5b34..dfb9ae64068 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,7 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
- MTEST_SUITES: # --suite setup --suite ssl --suite ...
+ MTEST_SUITES: --suite setup --suite pytest --suite ssl
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
--
2.52.0
Hi Jelte,
Thanks for working on this. I’ve done an initial review of patch 4 and
here are some comments below.
1) Test infra: tmp_check() fixture looks wrong / unused variable
def tmp_check(tmp_path_factory):
    d = os.getenv("TESTDATADIR")
    if d:
        d = pathlib.Path(d)
    else:
        d = tmp_path_factory.mktemp("tmp_check")
    return tmp_path_factory.mktemp("check_tmp")
You compute d then ignore it and always return a new temp dir
"check_tmp". It should probably return d (or d / "check_tmp").
2) PQexec NULL handling is missing
+ def exec(self, query: str):
+     ...
+     res = self._lib.PQexec(self._handle, query.encode())
+     return self._stack.enter_context(PGresult(self._lib, res))
No NULL check. If PQexec returns NULL (OOM, connection lost), the
PGresult wrapper will pass it to PQresultStatus etc., causing
undefined behavior or a crash.
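A sketch of the guard I have in mind (assuming PQerrorMessage() is
already declared on self._lib with a c_char_p restype, and reusing the
wrapper's LibpqError):

def exec(self, query: str):
    res = self._lib.PQexec(self._handle, query.encode())
    if not res:
        # PQexec returns NULL on OOM or a lost connection; raise here
        # instead of handing NULL to PQresultStatus() and friends.
        msg = self._lib.PQerrorMessage(self._handle)
        raise LibpqError(msg.decode(errors="replace") if msg else "out of memory")
    return self._stack.enter_context(PGresult(self._lib, res))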
3) array parsing may be too simple
_parse_array() does inner.split(","). That will break for valid
Postgres arrays containing:
- quoted elements with commas: {"a,b","c"}
- escapes / quotes: {"a\"b"} or {"a\\b"}
- nested arrays: {{1,2},{3,4}}
Should we document the constraint here, or implement a more complete
state machine? (Sketch below.)
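If we go the state-machine route, here is a minimal recursive sketch
(element type conversion omitted, and untested against every corner
case):

def parse_array(text):
    """Parses Postgres array output text, handling quoting, backslash
    escapes, NULLs, and nested arrays. Elements are returned as strings
    (or None for NULL); type conversion is left to the caller."""
    pos = 0

    def parse():
        nonlocal pos
        assert text[pos] == "{"
        pos += 1
        elems = []
        while text[pos] != "}":
            if text[pos] == "{":
                elems.append(parse())  # nested array
            elif text[pos] == '"':
                pos += 1
                buf = []
                while text[pos] != '"':
                    if text[pos] == "\\":
                        pos += 1  # take the escaped character literally
                    buf.append(text[pos])
                    pos += 1
                pos += 1  # skip the closing quote
                elems.append("".join(buf))
            else:
                start = pos
                while text[pos] not in ",}":
                    pos += 1
                val = text[start:pos]
                elems.append(None if val == "NULL" else val)
            if text[pos] == ",":
                pos += 1
        pos += 1  # skip the closing brace
        return elems

    return parse()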
4) Type conversion: timestamp/timestamptz conversion could be wrong
The code uses datetime.datetime.fromisoformat() for both timestamp and
timestamptz.
- timestamptz output formatting from Postgres is not always
fromisoformat() friendly (it can include timezone formats that differ).
- fromisoformat() yields timezone-aware datetimes only if the string
has an offset, but Postgres output depends on the DateStyle and
timezone settings.
This could make tests brittle across environments. Should we use
dateutil.parser (if allowed), or document that the server uses ISO
settings for pytest?
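To make the brittleness concrete (this reflects CPython's documented
fromisoformat() behavior):

import datetime

# With DateStyle=ISO, a plain timestamp round-trips and stays naive:
naive = datetime.datetime.fromisoformat("2025-10-20 15:30:45")
assert naive.tzinfo is None

# But timestamptz output uses a bare hour offset ("+02"), which
# fromisoformat() accepts only on Python >= 3.11; older versions
# require the "+02:00" spelling and raise ValueError otherwise.
aware = datetime.datetime.fromisoformat("2025-10-20 15:30:45+02:00")
assert aware.utcoffset() == datetime.timedelta(hours=2)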
5) Server init: pg_ctl init vs initdb
In the non-Windows branch:
run(pg_ctl, "-D", datadir, "init")
initdb seems to be the more common way here.
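i.e. something closer to this (flags as I remember them from the Perl
Cluster module; not verified):

run("initdb", "-D", datadir, "-A", "trust", "-N")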
6) Logging config: log_connections = all seems wrong
print("log_connections = all", file=f)
I don't see an option "all" for this parameter
https://postgresqlco.nf/doc/en/param/log_connections/
7) UX: error message handling and query attachment
raise_error() builds the message from the primary error text plus an
optional "Query: ..." suffix. Should we include SQLSTATE and severity
in the message string by default? It would help when reading CI logs.
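A hypothetical sketch of what I mean, built on PQresultErrorField()
(the helper shape here is invented, not the patch's actual API, and it
assumes PQresultErrorField() is declared with a c_char_p restype):

PG_DIAG_SEVERITY = ord("S")
PG_DIAG_SQLSTATE = ord("C")
PG_DIAG_MESSAGE_PRIMARY = ord("M")

def raise_error(lib, res):
    def field(code):
        # Returns the requested diagnostic field, or "" if unset.
        val = lib.PQresultErrorField(res, code)
        return val.decode() if val else ""

    raise RuntimeError(
        "{}: {} (SQLSTATE {})".format(
            field(PG_DIAG_SEVERITY),
            field(PG_DIAG_MESSAGE_PRIMARY),
            field(PG_DIAG_SQLSTATE),
        )
    )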
--
Best,
Xuneng
Hi,
Thanks for the significant work and effort invested in this project. I
believe it has the potential to be valuable for PostgreSQL over the
next 10 years and beyond.
I noticed that this topic has gone through a long series of
discussions in the past. Have we reached consensus on the necessity of
the project and its overall direction? As we’ve seen before, many
ambitious initiatives are started but don’t always make it to
completion. Projects with a clearly articulated roadmap and milestones
tend to attract and sustain contributor interest more effectively.
Assuming there is interest in continuing and contributing to this
work, the amount of historical context to absorb could be intimidating
for new contributors. Do we currently have a wiki page or similar
document that summarizes the project’s current state and future
direction? If not, it might be worthwhile to create one—not only to
lower the barrier to entry for new contributors, but also to support
long-term maintainability.
--
Best,
Xuneng
On Fri Dec 19, 2025 at 3:18 PM CET, Xuneng Zhou wrote:
Thanks for working on this. I’ve done an initial review of patch 4 and
here are some comments below.
Attached is a version where I addressed all of those comments (the few
that I didn't, or that I addressed in non-obvious ways, are discussed
at the end). I also made a lot of improvements to the patchset:
1. Includes infrastructure to create multiple Postgres clusters in a
single test or module.
2. Basic documentation for the pytest testing infrastructure.
3. Configure pytest and dependencies in pyproject.toml instead of a
requirements file.
4. Set up the environment using "uv"[1]https://github.com/astral-sh/uv if no pytest binary can be
found and uv is found.
5. Use INITDB_TEMPLATE to speed up cluster creation.
6. Don't enable fsync during tests to speed them up.
7. I added a patch where I've rewritten the libpq load_balance_hosts
tests in python as a validation that the new infrastructure and APIs
work well. (the perl ones were also written by me)
8. Postgres logs are now included as a separate pytest "section" in the
output.
9. The pgtap output now includes all of the pytest sections. For an
example of what the failure output looks like, take a look here[2]https://cirrus-ci.com/task/4805772426608640?logs=test_world#L16
I also *removed* a few things that Jacob had initially added:
1. The SCRAM-based Windows auth. I think it's a good idea, but it
doesn't work with INITDB_TEMPLATE. I think that logic should be made
part of a separate patchset that stops using trust auth when creating
the INITDB_TEMPLATE. That way the Perl tests can benefit from it too.
So seems good to do, but separate from the whole pytest work.
2. I removed the current_windows_user() function. This was dead code (as
also written in Jacob's comment) and python has built-in ways to get the current user.
3. I removed the fancy missing/incorrect dependency detection script. I
think (as Jacob also suggested in his code comments) that
importorskip is a better fit for this. Especially since we only have
pytest as a dependency for the core framework, and only the
cryptography package for the ssl tests.
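For reference, the pattern the ssl conftest uses is simply:

@pytest.fixture(scope="session")
def cryptography():
    # Skips (rather than fails) all dependent tests when the package is
    # missing or older than the floor version.
    return pytest.importorskip("cryptography", "3.3.2")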
Finally, I prepared a PR[3]https://github.com/anarazel/pg-vm-images/pull/130 for our images to include the pytest
dependencies, so a future version of the patchset won't need to install
the required packages ad hoc.
IMO patch 4 now serves as a good enough central infrastructure base for
people to develop tests on top of (which would possibly add some more
infrastructure as needed).
Regarding your second message, about consensus on the necessity of the
project: yes, based on the in-person conversations about this, people
are either in favor of a pytest-based test framework or neutral. There
were no strong objections. We're now at the point where "someone"
actually needs to do the work of getting some decent infrastructure in
place, which is what Jacob and I have been trying to do.
[1]: https://github.com/astral-sh/uv
[2]: https://cirrus-ci.com/task/4805772426608640?logs=test_world#L16
[3]: https://github.com/anarazel/pg-vm-images/pull/130
4) Type conversion: timestamp/timestamptz conversion could be wrong
The code uses datetime.datetime.fromisoformat() for both timestamp and
timestamptz.
I now configured datestyle to ISO and UTC in postgresql.conf. If that
turns out not to be enough at some point (e.g. because a test sets a
different datestyle), we can revisit this.
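Concretely, the generated postgresql.conf now carries something along
these lines (exact spelling assumed):

datestyle = 'ISO'
timezone = 'UTC'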
6) Logging config: log_connections = all seems wrong
print("log_connections = all", file=f)
I don't see an option "all" for this parameter
https://postgresqlco.nf/doc/en/param/log_connections/
That's a new value since PG18:
https://postgresqlco.nf/doc/en/param/log_connections/18/
7) UX: error message handling and query attachment
raise_error() builds the message from the primary error text plus an
optional "Query: ..." suffix. Should we include SQLSTATE and severity
in the message string by default? It would help when reading CI logs.
I don't think adding this additional info helps much anymore, now that
the postgres logs (item 8) are part of the output too. So I left it like
this, and even removed the query from the error message. By only
including the actual error message it's easier to match on it for errors
that a test expects. Matching on the SQLSTATE can be also done by
matching on the error type.
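In other words, a test matches on the message text and lets the
exception class carry the SQLSTATE, in the style of the error tests
earlier in the series:

with pytest.raises(libpq.errors.SyntaxError) as exc_info:
    conn.sql("SELEC 1")

# The class already encodes the SQLSTATE; the attribute is still
# available if a test wants to assert on it explicitly.
assert exc_info.value.sqlstate == "42601"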
Attachments:
Attachment: v5-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch (text/x-patch)
From 00ed77c794a7c626a46d05aa782e1c382e29da0e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v5 1/7] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index d7c5193d4ce..551e27f5eb3 100644
--- a/meson.build
+++ b/meson.build
@@ -3981,6 +3981,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -4017,3 +4018,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: 36b8f4974a884a7206df97f37ea62d2adc0b77f0
--
2.52.0
Attachment: v5-0002-Add-support-for-pytest-test-suites.patch (text/x-patch)
From eae9e70547c3e92b53f6028292f387a6e5dd38d6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v5 2/7] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
TODOs:
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 37 +++++--
.gitignore | 4 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 29 +++++-
meson.build | 107 +++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 2 +-
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 631 insertions(+), 16 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 038d043d00e..a83acb39e97 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -225,7 +227,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -317,7 +321,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -337,7 +344,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -496,8 +505,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -521,14 +532,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -665,6 +677,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -714,6 +728,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -787,6 +802,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -795,8 +812,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -859,7 +878,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a8c73bba9ba 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,8 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
+*.egg-info/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +45,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 14ad0a5006f..f28db423cd8 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19229,6 +19262,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 01b3bbc1be8..8226e2a1342 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2412,6 +2417,26 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ PGAC_PATH_PROGS(UV, uv)
+ if test -n "$UV"; then
+ AC_MSG_CHECKING([whether uv can install pytest dependencies])
+ if "$UV" pip install "$srcdir" >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$UV run pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found and uv failed to install dependencies])
+ fi
+ else
+ AC_MSG_ERROR([pytest not found])
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 551e27f5eb3..2ec125116a2 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,41 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+uv = not_found_dep
+use_uv = false
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: false)
+
+ # If pytest not found, try installing with uv
+ if not pytest.found()
+ uv = find_program('uv', native: true, required: false)
+ if uv.found()
+ message('Installing pytest dependencies with uv...')
+ uv_install = run_command(uv, 'pip', 'install', meson.project_source_root(), check: false)
+ if uv_install.returncode() == 0
+ use_uv = true
+ pytest_enabled = true
+ endif
+ endif
+ else
+ pytest_enabled = true
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3808,6 +3843,76 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ if use_uv
+ test_command = [uv.full_path(), 'run', 'pytest']
+ elif pytest_enabled
+ test_command = [pytest.full_path()]
+ else
+ # Dummy value - test will be skipped anyway
+ test_command = ['pytest']
+ endif
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ # We also configure the same PYTHONPATH in the pytest settings in
+ # pyproject.toml, but pytest versions below 8.4 only actually use that
+ # value after plugin loading. So we need to configure it here too. This
+ # won't help people manually running pytest outside of meson/make, but we
+ # expect those to use a recent enough version of pytest anyway (and if
+ # not they can manually configure PYTHONPATH too).
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3982,6 +4087,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -4022,6 +4128,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index c6edf14ec44..5b9a804aa94 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 511a72e6238..c035dbb7fc7 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -12,7 +12,7 @@ subdir = src/test
top_builddir = ../..
include $(top_builddir)/src/Makefile.global
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = perl postmaster pytest regress isolation modules authentication recovery subscription
ifeq ($(with_icu),yes)
SUBDIRS += icu
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (e.g. \033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
--
2.52.0
[Attachment: v5-0003-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch]
From 313ffe863b8a3eaf48ea578fc17596ad7078595f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v5 3/7] ci: Add MTEST_SUITES for optional test tailoring
Should make it easier to control the test cycle time for Cirrus. Add the
desired suites (remembering `--suite setup`!) to the top-level envvar.
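For example, a CI run that only exercises the setup and pytest suites
could set (just a sketch; any list of --suite flags works):

    MTEST_SUITES: --suite setup --suite pytest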
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a83acb39e97..a2c3febc30c 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -251,7 +252,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -396,7 +397,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -614,7 +615,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -627,7 +628,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -751,7 +752,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -834,7 +835,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -895,7 +896,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.52.0
[Attachment: v5-0004-Add-pytest-infrastructure-to-interact-with-Postgr.patch]
From b710868bcb844416dc87faa70daab055a5cdb7f3 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v5 4/7] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be easy enough to use that the
actual tests contain only the logic for the behaviour under test, rather
than a pile of boilerplate. For example: types are converted to their
Python counterparts automatically, errors become actual Python
exceptions, and results of queries that return only a single row or cell
are unpacked automatically, so you don't have to write rows[0][0] when a
query returns a single cell.
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
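As a sketch of the intended usage (assuming the generated SQLSTATE error
classes are re-exported from libpq.errors):

    import pytest
    from libpq import errors

    def test_example(conn):
        # a single-cell result is unpacked to a plain Python value
        assert conn.sql("SELECT 1 + 1") == 2

        # server errors surface as typed Python exceptions
        with pytest.raises(errors.DivisionByZero):
            conn.sql("SELECT 1/0")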
---
doc/src/sgml/regress.sgml | 54 +-
pyproject.toml | 3 +
src/backend/utils/errcodes.txt | 5 +
src/test/pytest/README | 140 +-
src/test/pytest/libpq/__init__.py | 36 +
src/test/pytest/libpq/_core.py | 489 +++++
src/test/pytest/libpq/_error_base.py | 74 +
src/test/pytest/libpq/_generated_errors.py | 2116 ++++++++++++++++++++
src/test/pytest/libpq/errors.py | 39 +
src/test/pytest/meson.build | 5 +-
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 +
src/test/pytest/pypg/fixtures.py | 335 ++++
src/test/pytest/pypg/server.py | 470 +++++
src/test/pytest/pypg/util.py | 42 +
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 +
src/test/pytest/pyt/test_libpq.py | 172 ++
src/test/pytest/pyt/test_multi_server.py | 46 +
src/test/pytest/pyt/test_query_helpers.py | 347 ++++
src/tools/generate_pytest_libpq_errors.py | 147 ++
21 files changed, 4634 insertions(+), 3 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/_error_base.py
create mode 100644 src/test/pytest/libpq/_generated_errors.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
create mode 100755 src/tools/generate_pytest_libpq_errors.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..1440815b23a 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,58 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need either
+ <application>pytest</application> or <application>uv</application>
+ installed on your system.
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
+ </para>
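+
+  <para>
+   For example, to double the default timeout when running the pytest suites
+   under Meson:
+<programlisting>
+PG_TEST_TIMEOUT_DEFAULT=360 meson test --suite pytest
+</programlisting>
+  </para>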
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index c96aa7c49ef..40c7555047e 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -21,6 +21,11 @@
# doc/src/sgml/errcodes-table.sgml
# a SGML table of error codes for inclusion in the documentation
#
+# src/test/pytest/libpq/_generated_errors.py
+# Python exception classes for the pytest libpq wrapper
+# Note: This needs to be manually regenerated by running
+# src/tools/generate_pytest_libpq_errors.py
+#
# The format of this file is one error code per line, with the following
# whitespace-separated fields:
#
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..9dc50ca111f 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,139 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have either pytest or uv
+already installed.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf based builds, you can run:
+ make check
+or
+ make installcheck
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+ A simple but user-friendly Python wrapper around libpq.
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A PGconn instance connected to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example test:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server is independent
+ assert node1.port != node2.port
+
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+ # After the test finishes, the original configuration is restored and
+ # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
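+
+For example (a minimal sketch):
+
+    def test_has_time_left(remaining_timeout):
+        # remaining_timeout is a callable returning the seconds left
+        assert remaining_timeout() > 0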
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..cb4d18b6206
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError, LibpqWarning
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "LibpqWarning",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..0d77996d572
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,489 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError, make_error
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ if system == "Windows":
+ # On Windows, libpq.dll is confusingly in bindir, not libdir. And we
+ # need to add this directory the the search path.
+ libpq_path = os.path.join(bindir, name)
+ lib = ctypes.CDLL(libpq_path)
+ else:
+ libpq_path = os.path.join(libdir, name)
+ lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises an appropriate LibpqError subclass based on the error fields.
+ Extracts SQLSTATE and other diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise make_error(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/_error_base.py b/src/test/pytest/libpq/_error_base.py
new file mode 100644
index 00000000000..5c70c077193
--- /dev/null
+++ b/src/test/pytest/libpq/_error_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Base exception classes for libpq errors and warnings.
+"""
+
+from typing import Optional
+
+
+class LibpqExceptionMixin:
+ """Mixin providing PostgreSQL error field attributes."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
+
+
+class LibpqError(LibpqExceptionMixin, RuntimeError):
+ """Base exception for libpq errors."""
+
+ pass
+
+
+class LibpqWarning(LibpqExceptionMixin, UserWarning):
+ """Base exception for libpq warnings."""
+
+ pass
diff --git a/src/test/pytest/libpq/_generated_errors.py b/src/test/pytest/libpq/_generated_errors.py
new file mode 100644
index 00000000000..f50f3143580
--- /dev/null
+++ b/src/test/pytest/libpq/_generated_errors.py
@@ -0,0 +1,2116 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.
+
+"""
+Generated PostgreSQL error classes mapped from SQLSTATE codes.
+"""
+
+from typing import Dict
+
+from ._error_base import LibpqError, LibpqWarning
+
+
+class SuccessfulCompletion(LibpqError):
+ """SQLSTATE 00000 - successful completion."""
+
+ pass
+
+
+class Warning(LibpqWarning):
+ """SQLSTATE 01000 - warning."""
+
+ pass
+
+
+class DynamicResultSetsReturnedWarning(Warning):
+ """SQLSTATE 0100C - dynamic result sets returned."""
+
+ pass
+
+
+class ImplicitZeroBitPaddingWarning(Warning):
+ """SQLSTATE 01008 - implicit zero bit padding."""
+
+ pass
+
+
+class NullValueEliminatedInSetFunctionWarning(Warning):
+ """SQLSTATE 01003 - null value eliminated in set function."""
+
+ pass
+
+
+class PrivilegeNotGrantedWarning(Warning):
+ """SQLSTATE 01007 - privilege not granted."""
+
+ pass
+
+
+class PrivilegeNotRevokedWarning(Warning):
+ """SQLSTATE 01006 - privilege not revoked."""
+
+ pass
+
+
+class StringDataRightTruncationWarning(Warning):
+ """SQLSTATE 01004 - string data right truncation."""
+
+ pass
+
+
+class DeprecatedFeatureWarning(Warning):
+ """SQLSTATE 01P01 - deprecated feature."""
+
+ pass
+
+
+class NoData(LibpqError):
+ """SQLSTATE 02000 - no data."""
+
+ pass
+
+
+class NoAdditionalDynamicResultSetsReturned(NoData):
+ """SQLSTATE 02001 - no additional dynamic result sets returned."""
+
+ pass
+
+
+class SQLStatementNotYetComplete(LibpqError):
+ """SQLSTATE 03000 - sql statement not yet complete."""
+
+ pass
+
+
+class ConnectionException(LibpqError):
+ """SQLSTATE 08000 - connection exception."""
+
+ pass
+
+
+class ConnectionDoesNotExist(ConnectionException):
+ """SQLSTATE 08003 - connection does not exist."""
+
+ pass
+
+
+class ConnectionFailure(ConnectionException):
+ """SQLSTATE 08006 - connection failure."""
+
+ pass
+
+
+class SQLClientUnableToEstablishSQLConnection(ConnectionException):
+ """SQLSTATE 08001 - sqlclient unable to establish sqlconnection."""
+
+ pass
+
+
+class SQLServerRejectedEstablishmentOfSQLConnection(ConnectionException):
+ """SQLSTATE 08004 - sqlserver rejected establishment of sqlconnection."""
+
+ pass
+
+
+class TransactionResolutionUnknown(ConnectionException):
+ """SQLSTATE 08007 - transaction resolution unknown."""
+
+ pass
+
+
+class ProtocolViolation(ConnectionException):
+ """SQLSTATE 08P01 - protocol violation."""
+
+ pass
+
+
+class TriggeredActionException(LibpqError):
+ """SQLSTATE 09000 - triggered action exception."""
+
+ pass
+
+
+class FeatureNotSupported(LibpqError):
+ """SQLSTATE 0A000 - feature not supported."""
+
+ pass
+
+
+class InvalidTransactionInitiation(LibpqError):
+ """SQLSTATE 0B000 - invalid transaction initiation."""
+
+ pass
+
+
+class LocatorException(LibpqError):
+ """SQLSTATE 0F000 - locator exception."""
+
+ pass
+
+
+class InvalidLocatorSpecification(LocatorException):
+ """SQLSTATE 0F001 - invalid locator specification."""
+
+ pass
+
+
+class InvalidGrantor(LibpqError):
+ """SQLSTATE 0L000 - invalid grantor."""
+
+ pass
+
+
+class InvalidGrantOperation(InvalidGrantor):
+ """SQLSTATE 0LP01 - invalid grant operation."""
+
+ pass
+
+
+class InvalidRoleSpecification(LibpqError):
+ """SQLSTATE 0P000 - invalid role specification."""
+
+ pass
+
+
+class DiagnosticsException(LibpqError):
+ """SQLSTATE 0Z000 - diagnostics exception."""
+
+ pass
+
+
+class StackedDiagnosticsAccessedWithoutActiveHandler(DiagnosticsException):
+ """SQLSTATE 0Z002 - stacked diagnostics accessed without active handler."""
+
+ pass
+
+
+class InvalidArgumentForXquery(LibpqError):
+ """SQLSTATE 10608 - invalid argument for xquery."""
+
+ pass
+
+
+class CaseNotFound(LibpqError):
+ """SQLSTATE 20000 - case not found."""
+
+ pass
+
+
+class CardinalityViolation(LibpqError):
+ """SQLSTATE 21000 - cardinality violation."""
+
+ pass
+
+
+class DataException(LibpqError):
+ """SQLSTATE 22000 - data exception."""
+
+ pass
+
+
+class ArraySubscriptError(DataException):
+ """SQLSTATE 2202E - array subscript error."""
+
+ pass
+
+
+class CharacterNotInRepertoire(DataException):
+ """SQLSTATE 22021 - character not in repertoire."""
+
+ pass
+
+
+class DatetimeFieldOverflow(DataException):
+ """SQLSTATE 22008 - datetime field overflow."""
+
+ pass
+
+
+class DivisionByZero(DataException):
+ """SQLSTATE 22012 - division by zero."""
+
+ pass
+
+
+class ErrorInAssignment(DataException):
+ """SQLSTATE 22005 - error in assignment."""
+
+ pass
+
+
+class EscapeCharacterConflict(DataException):
+ """SQLSTATE 2200B - escape character conflict."""
+
+ pass
+
+
+class IndicatorOverflow(DataException):
+ """SQLSTATE 22022 - indicator overflow."""
+
+ pass
+
+
+class IntervalFieldOverflow(DataException):
+ """SQLSTATE 22015 - interval field overflow."""
+
+ pass
+
+
+class InvalidArgumentForLogarithm(DataException):
+ """SQLSTATE 2201E - invalid argument for logarithm."""
+
+ pass
+
+
+class InvalidArgumentForNtileFunction(DataException):
+ """SQLSTATE 22014 - invalid argument for ntile function."""
+
+ pass
+
+
+class InvalidArgumentForNthValueFunction(DataException):
+ """SQLSTATE 22016 - invalid argument for nth value function."""
+
+ pass
+
+
+class InvalidArgumentForPowerFunction(DataException):
+ """SQLSTATE 2201F - invalid argument for power function."""
+
+ pass
+
+
+class InvalidArgumentForWidthBucketFunction(DataException):
+ """SQLSTATE 2201G - invalid argument for width bucket function."""
+
+ pass
+
+
+class InvalidCharacterValueForCast(DataException):
+ """SQLSTATE 22018 - invalid character value for cast."""
+
+ pass
+
+
+class InvalidDatetimeFormat(DataException):
+ """SQLSTATE 22007 - invalid datetime format."""
+
+ pass
+
+
+class InvalidEscapeCharacter(DataException):
+ """SQLSTATE 22019 - invalid escape character."""
+
+ pass
+
+
+class InvalidEscapeOctet(DataException):
+ """SQLSTATE 2200D - invalid escape octet."""
+
+ pass
+
+
+class InvalidEscapeSequence(DataException):
+ """SQLSTATE 22025 - invalid escape sequence."""
+
+ pass
+
+
+class NonstandardUseOfEscapeCharacter(DataException):
+ """SQLSTATE 22P06 - nonstandard use of escape character."""
+
+ pass
+
+
+class InvalidIndicatorParameterValue(DataException):
+ """SQLSTATE 22010 - invalid indicator parameter value."""
+
+ pass
+
+
+class InvalidParameterValue(DataException):
+ """SQLSTATE 22023 - invalid parameter value."""
+
+ pass
+
+
+class InvalidPrecedingOrFollowingSize(DataException):
+ """SQLSTATE 22013 - invalid preceding or following size."""
+
+ pass
+
+
+class InvalidRegularExpression(DataException):
+ """SQLSTATE 2201B - invalid regular expression."""
+
+ pass
+
+
+class InvalidRowCountInLimitClause(DataException):
+ """SQLSTATE 2201W - invalid row count in limit clause."""
+
+ pass
+
+
+class InvalidRowCountInResultOffsetClause(DataException):
+ """SQLSTATE 2201X - invalid row count in result offset clause."""
+
+ pass
+
+
+class InvalidTablesampleArgument(DataException):
+ """SQLSTATE 2202H - invalid tablesample argument."""
+
+ pass
+
+
+class InvalidTablesampleRepeat(DataException):
+ """SQLSTATE 2202G - invalid tablesample repeat."""
+
+ pass
+
+
+class InvalidTimeZoneDisplacementValue(DataException):
+ """SQLSTATE 22009 - invalid time zone displacement value."""
+
+ pass
+
+
+class InvalidUseOfEscapeCharacter(DataException):
+ """SQLSTATE 2200C - invalid use of escape character."""
+
+ pass
+
+
+class MostSpecificTypeMismatch(DataException):
+ """SQLSTATE 2200G - most specific type mismatch."""
+
+ pass
+
+
+class NullValueNotAllowed(DataException):
+ """SQLSTATE 22004 - null value not allowed."""
+
+ pass
+
+
+class NullValueNoIndicatorParameter(DataException):
+ """SQLSTATE 22002 - null value no indicator parameter."""
+
+ pass
+
+
+class NumericValueOutOfRange(DataException):
+ """SQLSTATE 22003 - numeric value out of range."""
+
+ pass
+
+
+class SequenceGeneratorLimitExceeded(DataException):
+ """SQLSTATE 2200H - sequence generator limit exceeded."""
+
+ pass
+
+
+class StringDataLengthMismatch(DataException):
+ """SQLSTATE 22026 - string data length mismatch."""
+
+ pass
+
+
+class StringDataRightTruncation(DataException):
+ """SQLSTATE 22001 - string data right truncation."""
+
+ pass
+
+
+class SubstringError(DataException):
+ """SQLSTATE 22011 - substring error."""
+
+ pass
+
+
+class TrimError(DataException):
+ """SQLSTATE 22027 - trim error."""
+
+ pass
+
+
+class UnterminatedCString(DataException):
+ """SQLSTATE 22024 - unterminated c string."""
+
+ pass
+
+
+class ZeroLengthCharacterString(DataException):
+ """SQLSTATE 2200F - zero length character string."""
+
+ pass
+
+
+class FloatingPointException(DataException):
+ """SQLSTATE 22P01 - floating point exception."""
+
+ pass
+
+
+class InvalidTextRepresentation(DataException):
+ """SQLSTATE 22P02 - invalid text representation."""
+
+ pass
+
+
+class InvalidBinaryRepresentation(DataException):
+ """SQLSTATE 22P03 - invalid binary representation."""
+
+ pass
+
+
+class BadCopyFileFormat(DataException):
+ """SQLSTATE 22P04 - bad copy file format."""
+
+ pass
+
+
+class UntranslatableCharacter(DataException):
+ """SQLSTATE 22P05 - untranslatable character."""
+
+ pass
+
+
+class NotAnXmlDocument(DataException):
+ """SQLSTATE 2200L - not an xml document."""
+
+ pass
+
+
+class InvalidXmlDocument(DataException):
+ """SQLSTATE 2200M - invalid xml document."""
+
+ pass
+
+
+class InvalidXmlContent(DataException):
+ """SQLSTATE 2200N - invalid xml content."""
+
+ pass
+
+
+class InvalidXmlComment(DataException):
+ """SQLSTATE 2200S - invalid xml comment."""
+
+ pass
+
+
+class InvalidXmlProcessingInstruction(DataException):
+ """SQLSTATE 2200T - invalid xml processing instruction."""
+
+ pass
+
+
+class DuplicateJsonObjectKeyValue(DataException):
+ """SQLSTATE 22030 - duplicate json object key value."""
+
+ pass
+
+
+class InvalidArgumentForSQLJsonDatetimeFunction(DataException):
+ """SQLSTATE 22031 - invalid argument for sql json datetime function."""
+
+ pass
+
+
+class InvalidJsonText(DataException):
+ """SQLSTATE 22032 - invalid json text."""
+
+ pass
+
+
+class InvalidSQLJsonSubscript(DataException):
+ """SQLSTATE 22033 - invalid sql json subscript."""
+
+ pass
+
+
+class MoreThanOneSQLJsonItem(DataException):
+ """SQLSTATE 22034 - more than one sql json item."""
+
+ pass
+
+
+class NoSQLJsonItem(DataException):
+ """SQLSTATE 22035 - no sql json item."""
+
+ pass
+
+
+class NonNumericSQLJsonItem(DataException):
+ """SQLSTATE 22036 - non numeric sql json item."""
+
+ pass
+
+
+class NonUniqueKeysInAJsonObject(DataException):
+ """SQLSTATE 22037 - non unique keys in a json object."""
+
+ pass
+
+
+class SingletonSQLJsonItemRequired(DataException):
+ """SQLSTATE 22038 - singleton sql json item required."""
+
+ pass
+
+
+class SQLJsonArrayNotFound(DataException):
+ """SQLSTATE 22039 - sql json array not found."""
+
+ pass
+
+
+class SQLJsonMemberNotFound(DataException):
+ """SQLSTATE 2203A - sql json member not found."""
+
+ pass
+
+
+class SQLJsonNumberNotFound(DataException):
+ """SQLSTATE 2203B - sql json number not found."""
+
+ pass
+
+
+class SQLJsonObjectNotFound(DataException):
+ """SQLSTATE 2203C - sql json object not found."""
+
+ pass
+
+
+class TooManyJsonArrayElements(DataException):
+ """SQLSTATE 2203D - too many json array elements."""
+
+ pass
+
+
+class TooManyJsonObjectMembers(DataException):
+ """SQLSTATE 2203E - too many json object members."""
+
+ pass
+
+
+class SQLJsonScalarRequired(DataException):
+ """SQLSTATE 2203F - sql json scalar required."""
+
+ pass
+
+
+class SQLJsonItemCannotBeCastToTargetType(DataException):
+ """SQLSTATE 2203G - sql json item cannot be cast to target type."""
+
+ pass
+
+
+class IntegrityConstraintViolation(LibpqError):
+ """SQLSTATE 23000 - integrity constraint violation."""
+
+ pass
+
+
+class RestrictViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23001 - restrict violation."""
+
+ pass
+
+
+class NotNullViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23502 - not null violation."""
+
+ pass
+
+
+class ForeignKeyViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23503 - foreign key violation."""
+
+ pass
+
+
+class UniqueViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23505 - unique violation."""
+
+ pass
+
+
+class CheckViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23514 - check violation."""
+
+ pass
+
+
+class ExclusionViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23P01 - exclusion violation."""
+
+ pass
+
+
+class InvalidCursorState(LibpqError):
+ """SQLSTATE 24000 - invalid cursor state."""
+
+ pass
+
+
+class InvalidTransactionState(LibpqError):
+ """SQLSTATE 25000 - invalid transaction state."""
+
+ pass
+
+
+class ActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25001 - active sql transaction."""
+
+ pass
+
+
+class BranchTransactionAlreadyActive(InvalidTransactionState):
+ """SQLSTATE 25002 - branch transaction already active."""
+
+ pass
+
+
+class HeldCursorRequiresSameIsolationLevel(InvalidTransactionState):
+ """SQLSTATE 25008 - held cursor requires same isolation level."""
+
+ pass
+
+
+class InappropriateAccessModeForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25003 - inappropriate access mode for branch transaction."""
+
+ pass
+
+
+class InappropriateIsolationLevelForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25004 - inappropriate isolation level for branch transaction."""
+
+ pass
+
+
+class NoActiveSQLTransactionForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25005 - no active sql transaction for branch transaction."""
+
+ pass
+
+
+class ReadOnlySQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25006 - read only sql transaction."""
+
+ pass
+
+
+class SchemaAndDataStatementMixingNotSupported(InvalidTransactionState):
+ """SQLSTATE 25007 - schema and data statement mixing not supported."""
+
+ pass
+
+
+class NoActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P01 - no active sql transaction."""
+
+ pass
+
+
+class InFailedSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P02 - in failed sql transaction."""
+
+ pass
+
+
+class IdleInTransactionSessionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P03 - idle in transaction session timeout."""
+
+ pass
+
+
+class TransactionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P04 - transaction timeout."""
+
+ pass
+
+
+class InvalidSQLStatementName(LibpqError):
+ """SQLSTATE 26000 - invalid sql statement name."""
+
+ pass
+
+
+class TriggeredDataChangeViolation(LibpqError):
+ """SQLSTATE 27000 - triggered data change violation."""
+
+ pass
+
+
+class InvalidAuthorizationSpecification(LibpqError):
+ """SQLSTATE 28000 - invalid authorization specification."""
+
+ pass
+
+
+class InvalidPassword(InvalidAuthorizationSpecification):
+ """SQLSTATE 28P01 - invalid password."""
+
+ pass
+
+
+class DependentPrivilegeDescriptorsStillExist(LibpqError):
+ """SQLSTATE 2B000 - dependent privilege descriptors still exist."""
+
+ pass
+
+
+class DependentObjectsStillExist(DependentPrivilegeDescriptorsStillExist):
+ """SQLSTATE 2BP01 - dependent objects still exist."""
+
+ pass
+
+
+class InvalidTransactionTermination(LibpqError):
+ """SQLSTATE 2D000 - invalid transaction termination."""
+
+ pass
+
+
+class SQLRoutineException(LibpqError):
+ """SQLSTATE 2F000 - sql routine exception."""
+
+ pass
+
+
+class FunctionExecutedNoReturnStatement(SQLRoutineException):
+ """SQLSTATE 2F005 - function executed no return statement."""
+
+ pass
+
+
+class SREModifyingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F002 - modifying sql data not permitted."""
+
+ pass
+
+
+class SREProhibitedSQLStatementAttempted(SQLRoutineException):
+ """SQLSTATE 2F003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class SREReadingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F004 - reading sql data not permitted."""
+
+ pass
+
+
+class InvalidCursorName(LibpqError):
+ """SQLSTATE 34000 - invalid cursor name."""
+
+ pass
+
+
+class ExternalRoutineException(LibpqError):
+ """SQLSTATE 38000 - external routine exception."""
+
+ pass
+
+
+class ContainingSQLNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38001 - containing sql not permitted."""
+
+ pass
+
+
+class EREModifyingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38002 - modifying sql data not permitted."""
+
+ pass
+
+
+class EREProhibitedSQLStatementAttempted(ExternalRoutineException):
+ """SQLSTATE 38003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class EREReadingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38004 - reading sql data not permitted."""
+
+ pass
+
+
+class ExternalRoutineInvocationException(LibpqError):
+ """SQLSTATE 39000 - external routine invocation exception."""
+
+ pass
+
+
+class InvalidSqlstateReturned(ExternalRoutineInvocationException):
+ """SQLSTATE 39001 - invalid sqlstate returned."""
+
+ pass
+
+
+class ERIENullValueNotAllowed(ExternalRoutineInvocationException):
+ """SQLSTATE 39004 - null value not allowed."""
+
+ pass
+
+
+class TriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P01 - trigger protocol violated."""
+
+ pass
+
+
+class SrfProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P02 - srf protocol violated."""
+
+ pass
+
+
+class EventTriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P03 - event trigger protocol violated."""
+
+ pass
+
+
+class SavepointException(LibpqError):
+ """SQLSTATE 3B000 - savepoint exception."""
+
+ pass
+
+
+class InvalidSavepointSpecification(SavepointException):
+ """SQLSTATE 3B001 - invalid savepoint specification."""
+
+ pass
+
+
+class InvalidCatalogName(LibpqError):
+ """SQLSTATE 3D000 - invalid catalog name."""
+
+ pass
+
+
+class InvalidSchemaName(LibpqError):
+ """SQLSTATE 3F000 - invalid schema name."""
+
+ pass
+
+
+class TransactionRollback(LibpqError):
+ """SQLSTATE 40000 - transaction rollback."""
+
+ pass
+
+
+class TransactionIntegrityConstraintViolation(TransactionRollback):
+ """SQLSTATE 40002 - transaction integrity constraint violation."""
+
+ pass
+
+
+class SerializationFailure(TransactionRollback):
+ """SQLSTATE 40001 - serialization failure."""
+
+ pass
+
+
+class StatementCompletionUnknown(TransactionRollback):
+ """SQLSTATE 40003 - statement completion unknown."""
+
+ pass
+
+
+class DeadlockDetected(TransactionRollback):
+ """SQLSTATE 40P01 - deadlock detected."""
+
+ pass
+
+
+class SyntaxErrorOrAccessRuleViolation(LibpqError):
+ """SQLSTATE 42000 - syntax error or access rule violation."""
+
+ pass
+
+
+class SyntaxError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42601 - syntax error."""
+
+ pass
+
+
+class InsufficientPrivilege(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42501 - insufficient privilege."""
+
+ pass
+
+
+class CannotCoerce(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42846 - cannot coerce."""
+
+ pass
+
+
+class GroupingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42803 - grouping error."""
+
+ pass
+
+
+class WindowingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P20 - windowing error."""
+
+ pass
+
+
+class InvalidRecursion(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P19 - invalid recursion."""
+
+ pass
+
+
+class InvalidForeignKey(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42830 - invalid foreign key."""
+
+ pass
+
+
+class InvalidName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42602 - invalid name."""
+
+ pass
+
+
+class NameTooLong(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42622 - name too long."""
+
+ pass
+
+
+class ReservedName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42939 - reserved name."""
+
+ pass
+
+
+class DatatypeMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42804 - datatype mismatch."""
+
+ pass
+
+
+class IndeterminateDatatype(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P18 - indeterminate datatype."""
+
+ pass
+
+
+class CollationMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P21 - collation mismatch."""
+
+ pass
+
+
+class IndeterminateCollation(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P22 - indeterminate collation."""
+
+ pass
+
+
+class WrongObjectType(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42809 - wrong object type."""
+
+ pass
+
+
+class GeneratedAlways(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 428C9 - generated always."""
+
+ pass
+
+
+class UndefinedColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42703 - undefined column."""
+
+ pass
+
+
+class UndefinedFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42883 - undefined function."""
+
+ pass
+
+
+class UndefinedTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P01 - undefined table."""
+
+ pass
+
+
+class UndefinedParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P02 - undefined parameter."""
+
+ pass
+
+
+class UndefinedObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42704 - undefined object."""
+
+ pass
+
+
+class DuplicateColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42701 - duplicate column."""
+
+ pass
+
+
+class DuplicateCursor(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P03 - duplicate cursor."""
+
+ pass
+
+
+class DuplicateDatabase(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P04 - duplicate database."""
+
+ pass
+
+
+class DuplicateFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42723 - duplicate function."""
+
+ pass
+
+
+class DuplicatePreparedStatement(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P05 - duplicate prepared statement."""
+
+ pass
+
+
+class DuplicateSchema(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P06 - duplicate schema."""
+
+ pass
+
+
+class DuplicateTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P07 - duplicate table."""
+
+ pass
+
+
+class DuplicateAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42712 - duplicate alias."""
+
+ pass
+
+
+class DuplicateObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42710 - duplicate object."""
+
+ pass
+
+
+class AmbiguousColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42702 - ambiguous column."""
+
+ pass
+
+
+class AmbiguousFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42725 - ambiguous function."""
+
+ pass
+
+
+class AmbiguousParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P08 - ambiguous parameter."""
+
+ pass
+
+
+class AmbiguousAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P09 - ambiguous alias."""
+
+ pass
+
+
+class InvalidColumnReference(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P10 - invalid column reference."""
+
+ pass
+
+
+class InvalidColumnDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42611 - invalid column definition."""
+
+ pass
+
+
+class InvalidCursorDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P11 - invalid cursor definition."""
+
+ pass
+
+
+class InvalidDatabaseDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P12 - invalid database definition."""
+
+ pass
+
+
+class InvalidFunctionDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P13 - invalid function definition."""
+
+ pass
+
+
+class InvalidPreparedStatementDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P14 - invalid prepared statement definition."""
+
+ pass
+
+
+class InvalidSchemaDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P15 - invalid schema definition."""
+
+ pass
+
+
+class InvalidTableDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P16 - invalid table definition."""
+
+ pass
+
+
+class InvalidObjectDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P17 - invalid object definition."""
+
+ pass
+
+
+class WithCheckOptionViolation(LibpqError):
+ """SQLSTATE 44000 - with check option violation."""
+
+ pass
+
+
+class InsufficientResources(LibpqError):
+ """SQLSTATE 53000 - insufficient resources."""
+
+ pass
+
+
+class DiskFull(InsufficientResources):
+ """SQLSTATE 53100 - disk full."""
+
+ pass
+
+
+class OutOfMemory(InsufficientResources):
+ """SQLSTATE 53200 - out of memory."""
+
+ pass
+
+
+class TooManyConnections(InsufficientResources):
+ """SQLSTATE 53300 - too many connections."""
+
+ pass
+
+
+class ConfigurationLimitExceeded(InsufficientResources):
+ """SQLSTATE 53400 - configuration limit exceeded."""
+
+ pass
+
+
+class ProgramLimitExceeded(LibpqError):
+ """SQLSTATE 54000 - program limit exceeded."""
+
+ pass
+
+
+class StatementTooComplex(ProgramLimitExceeded):
+ """SQLSTATE 54001 - statement too complex."""
+
+ pass
+
+
+class TooManyColumns(ProgramLimitExceeded):
+ """SQLSTATE 54011 - too many columns."""
+
+ pass
+
+
+class TooManyArguments(ProgramLimitExceeded):
+ """SQLSTATE 54023 - too many arguments."""
+
+ pass
+
+
+class ObjectNotInPrerequisiteState(LibpqError):
+ """SQLSTATE 55000 - object not in prerequisite state."""
+
+ pass
+
+
+class ObjectInUse(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55006 - object in use."""
+
+ pass
+
+
+class CantChangeRuntimeParam(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P02 - cant change runtime param."""
+
+ pass
+
+
+class LockNotAvailable(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P03 - lock not available."""
+
+ pass
+
+
+class UnsafeNewEnumValueUsage(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P04 - unsafe new enum value usage."""
+
+ pass
+
+
+class OperatorIntervention(LibpqError):
+ """SQLSTATE 57000 - operator intervention."""
+
+ pass
+
+
+class QueryCanceled(OperatorIntervention):
+ """SQLSTATE 57014 - query canceled."""
+
+ pass
+
+
+class AdminShutdown(OperatorIntervention):
+ """SQLSTATE 57P01 - admin shutdown."""
+
+ pass
+
+
+class CrashShutdown(OperatorIntervention):
+ """SQLSTATE 57P02 - crash shutdown."""
+
+ pass
+
+
+class CannotConnectNow(OperatorIntervention):
+ """SQLSTATE 57P03 - cannot connect now."""
+
+ pass
+
+
+class DatabaseDropped(OperatorIntervention):
+ """SQLSTATE 57P04 - database dropped."""
+
+ pass
+
+
+class IdleSessionTimeout(OperatorIntervention):
+ """SQLSTATE 57P05 - idle session timeout."""
+
+ pass
+
+
+class SystemError(LibpqError):
+ """SQLSTATE 58000 - system error."""
+
+ pass
+
+
+class IoError(SystemError):
+ """SQLSTATE 58030 - io error."""
+
+ pass
+
+
+class UndefinedFile(SystemError):
+ """SQLSTATE 58P01 - undefined file."""
+
+ pass
+
+
+class DuplicateFile(SystemError):
+ """SQLSTATE 58P02 - duplicate file."""
+
+ pass
+
+
+class FileNameTooLong(SystemError):
+ """SQLSTATE 58P03 - file name too long."""
+
+ pass
+
+
+class ConfigFileError(LibpqError):
+ """SQLSTATE F0000 - config file error."""
+
+ pass
+
+
+class LockFileExists(ConfigFileError):
+ """SQLSTATE F0001 - lock file exists."""
+
+ pass
+
+
+class FDWError(LibpqError):
+ """SQLSTATE HV000 - fdw error."""
+
+ pass
+
+
+class FDWColumnNameNotFound(FDWError):
+ """SQLSTATE HV005 - fdw column name not found."""
+
+ pass
+
+
+class FDWDynamicParameterValueNeeded(FDWError):
+ """SQLSTATE HV002 - fdw dynamic parameter value needed."""
+
+ pass
+
+
+class FDWFunctionSequenceError(FDWError):
+ """SQLSTATE HV010 - fdw function sequence error."""
+
+ pass
+
+
+class FDWInconsistentDescriptorInformation(FDWError):
+ """SQLSTATE HV021 - fdw inconsistent descriptor information."""
+
+ pass
+
+
+class FDWInvalidAttributeValue(FDWError):
+ """SQLSTATE HV024 - fdw invalid attribute value."""
+
+ pass
+
+
+class FDWInvalidColumnName(FDWError):
+ """SQLSTATE HV007 - fdw invalid column name."""
+
+ pass
+
+
+class FDWInvalidColumnNumber(FDWError):
+ """SQLSTATE HV008 - fdw invalid column number."""
+
+ pass
+
+
+class FDWInvalidDataType(FDWError):
+ """SQLSTATE HV004 - fdw invalid data type."""
+
+ pass
+
+
+class FDWInvalidDataTypeDescriptors(FDWError):
+ """SQLSTATE HV006 - fdw invalid data type descriptors."""
+
+ pass
+
+
+class FDWInvalidDescriptorFieldIdentifier(FDWError):
+ """SQLSTATE HV091 - fdw invalid descriptor field identifier."""
+
+ pass
+
+
+class FDWInvalidHandle(FDWError):
+ """SQLSTATE HV00B - fdw invalid handle."""
+
+ pass
+
+
+class FDWInvalidOptionIndex(FDWError):
+ """SQLSTATE HV00C - fdw invalid option index."""
+
+ pass
+
+
+class FDWInvalidOptionName(FDWError):
+ """SQLSTATE HV00D - fdw invalid option name."""
+
+ pass
+
+
+class FDWInvalidStringLengthOrBufferLength(FDWError):
+ """SQLSTATE HV090 - fdw invalid string length or buffer length."""
+
+ pass
+
+
+class FDWInvalidStringFormat(FDWError):
+ """SQLSTATE HV00A - fdw invalid string format."""
+
+ pass
+
+
+class FDWInvalidUseOfNullPointer(FDWError):
+ """SQLSTATE HV009 - fdw invalid use of null pointer."""
+
+ pass
+
+
+class FDWTooManyHandles(FDWError):
+ """SQLSTATE HV014 - fdw too many handles."""
+
+ pass
+
+
+class FDWOutOfMemory(FDWError):
+ """SQLSTATE HV001 - fdw out of memory."""
+
+ pass
+
+
+class FDWNoSchemas(FDWError):
+ """SQLSTATE HV00P - fdw no schemas."""
+
+ pass
+
+
+class FDWOptionNameNotFound(FDWError):
+ """SQLSTATE HV00J - fdw option name not found."""
+
+ pass
+
+
+class FDWReplyHandle(FDWError):
+ """SQLSTATE HV00K - fdw reply handle."""
+
+ pass
+
+
+class FDWSchemaNotFound(FDWError):
+ """SQLSTATE HV00Q - fdw schema not found."""
+
+ pass
+
+
+class FDWTableNotFound(FDWError):
+ """SQLSTATE HV00R - fdw table not found."""
+
+ pass
+
+
+class FDWUnableToCreateExecution(FDWError):
+ """SQLSTATE HV00L - fdw unable to create execution."""
+
+ pass
+
+
+class FDWUnableToCreateReply(FDWError):
+ """SQLSTATE HV00M - fdw unable to create reply."""
+
+ pass
+
+
+class FDWUnableToEstablishConnection(FDWError):
+ """SQLSTATE HV00N - fdw unable to establish connection."""
+
+ pass
+
+
+class PlpgsqlError(LibpqError):
+ """SQLSTATE P0000 - plpgsql error."""
+
+ pass
+
+
+class RaiseException(PlpgsqlError):
+ """SQLSTATE P0001 - raise exception."""
+
+ pass
+
+
+class NoDataFound(PlpgsqlError):
+ """SQLSTATE P0002 - no data found."""
+
+ pass
+
+
+class TooManyRows(PlpgsqlError):
+ """SQLSTATE P0003 - too many rows."""
+
+ pass
+
+
+class AssertFailure(PlpgsqlError):
+ """SQLSTATE P0004 - assert failure."""
+
+ pass
+
+
+class InternalError(LibpqError):
+ """SQLSTATE XX000 - internal error."""
+
+ pass
+
+
+class DataCorrupted(InternalError):
+ """SQLSTATE XX001 - data corrupted."""
+
+ pass
+
+
+class IndexCorrupted(InternalError):
+ """SQLSTATE XX002 - index corrupted."""
+
+ pass
+
+
+SQLSTATE_TO_EXCEPTION: Dict[str, type] = {
+ "00000": SuccessfulCompletion,
+ "01000": Warning,
+ "0100C": DynamicResultSetsReturnedWarning,
+ "01008": ImplicitZeroBitPaddingWarning,
+ "01003": NullValueEliminatedInSetFunctionWarning,
+ "01007": PrivilegeNotGrantedWarning,
+ "01006": PrivilegeNotRevokedWarning,
+ "01004": StringDataRightTruncationWarning,
+ "01P01": DeprecatedFeatureWarning,
+ "02000": NoData,
+ "02001": NoAdditionalDynamicResultSetsReturned,
+ "03000": SQLStatementNotYetComplete,
+ "08000": ConnectionException,
+ "08003": ConnectionDoesNotExist,
+ "08006": ConnectionFailure,
+ "08001": SQLClientUnableToEstablishSQLConnection,
+ "08004": SQLServerRejectedEstablishmentOfSQLConnection,
+ "08007": TransactionResolutionUnknown,
+ "08P01": ProtocolViolation,
+ "09000": TriggeredActionException,
+ "0A000": FeatureNotSupported,
+ "0B000": InvalidTransactionInitiation,
+ "0F000": LocatorException,
+ "0F001": InvalidLocatorSpecification,
+ "0L000": InvalidGrantor,
+ "0LP01": InvalidGrantOperation,
+ "0P000": InvalidRoleSpecification,
+ "0Z000": DiagnosticsException,
+ "0Z002": StackedDiagnosticsAccessedWithoutActiveHandler,
+ "10608": InvalidArgumentForXquery,
+ "20000": CaseNotFound,
+ "21000": CardinalityViolation,
+ "22000": DataException,
+ "2202E": ArraySubscriptError,
+ "22021": CharacterNotInRepertoire,
+ "22008": DatetimeFieldOverflow,
+ "22012": DivisionByZero,
+ "22005": ErrorInAssignment,
+ "2200B": EscapeCharacterConflict,
+ "22022": IndicatorOverflow,
+ "22015": IntervalFieldOverflow,
+ "2201E": InvalidArgumentForLogarithm,
+ "22014": InvalidArgumentForNtileFunction,
+ "22016": InvalidArgumentForNthValueFunction,
+ "2201F": InvalidArgumentForPowerFunction,
+ "2201G": InvalidArgumentForWidthBucketFunction,
+ "22018": InvalidCharacterValueForCast,
+ "22007": InvalidDatetimeFormat,
+ "22019": InvalidEscapeCharacter,
+ "2200D": InvalidEscapeOctet,
+ "22025": InvalidEscapeSequence,
+ "22P06": NonstandardUseOfEscapeCharacter,
+ "22010": InvalidIndicatorParameterValue,
+ "22023": InvalidParameterValue,
+ "22013": InvalidPrecedingOrFollowingSize,
+ "2201B": InvalidRegularExpression,
+ "2201W": InvalidRowCountInLimitClause,
+ "2201X": InvalidRowCountInResultOffsetClause,
+ "2202H": InvalidTablesampleArgument,
+ "2202G": InvalidTablesampleRepeat,
+ "22009": InvalidTimeZoneDisplacementValue,
+ "2200C": InvalidUseOfEscapeCharacter,
+ "2200G": MostSpecificTypeMismatch,
+ "22004": NullValueNotAllowed,
+ "22002": NullValueNoIndicatorParameter,
+ "22003": NumericValueOutOfRange,
+ "2200H": SequenceGeneratorLimitExceeded,
+ "22026": StringDataLengthMismatch,
+ "22001": StringDataRightTruncation,
+ "22011": SubstringError,
+ "22027": TrimError,
+ "22024": UnterminatedCString,
+ "2200F": ZeroLengthCharacterString,
+ "22P01": FloatingPointException,
+ "22P02": InvalidTextRepresentation,
+ "22P03": InvalidBinaryRepresentation,
+ "22P04": BadCopyFileFormat,
+ "22P05": UntranslatableCharacter,
+ "2200L": NotAnXmlDocument,
+ "2200M": InvalidXmlDocument,
+ "2200N": InvalidXmlContent,
+ "2200S": InvalidXmlComment,
+ "2200T": InvalidXmlProcessingInstruction,
+ "22030": DuplicateJsonObjectKeyValue,
+ "22031": InvalidArgumentForSQLJsonDatetimeFunction,
+ "22032": InvalidJsonText,
+ "22033": InvalidSQLJsonSubscript,
+ "22034": MoreThanOneSQLJsonItem,
+ "22035": NoSQLJsonItem,
+ "22036": NonNumericSQLJsonItem,
+ "22037": NonUniqueKeysInAJsonObject,
+ "22038": SingletonSQLJsonItemRequired,
+ "22039": SQLJsonArrayNotFound,
+ "2203A": SQLJsonMemberNotFound,
+ "2203B": SQLJsonNumberNotFound,
+ "2203C": SQLJsonObjectNotFound,
+ "2203D": TooManyJsonArrayElements,
+ "2203E": TooManyJsonObjectMembers,
+ "2203F": SQLJsonScalarRequired,
+ "2203G": SQLJsonItemCannotBeCastToTargetType,
+ "23000": IntegrityConstraintViolation,
+ "23001": RestrictViolation,
+ "23502": NotNullViolation,
+ "23503": ForeignKeyViolation,
+ "23505": UniqueViolation,
+ "23514": CheckViolation,
+ "23P01": ExclusionViolation,
+ "24000": InvalidCursorState,
+ "25000": InvalidTransactionState,
+ "25001": ActiveSQLTransaction,
+ "25002": BranchTransactionAlreadyActive,
+ "25008": HeldCursorRequiresSameIsolationLevel,
+ "25003": InappropriateAccessModeForBranchTransaction,
+ "25004": InappropriateIsolationLevelForBranchTransaction,
+ "25005": NoActiveSQLTransactionForBranchTransaction,
+ "25006": ReadOnlySQLTransaction,
+ "25007": SchemaAndDataStatementMixingNotSupported,
+ "25P01": NoActiveSQLTransaction,
+ "25P02": InFailedSQLTransaction,
+ "25P03": IdleInTransactionSessionTimeout,
+ "25P04": TransactionTimeout,
+ "26000": InvalidSQLStatementName,
+ "27000": TriggeredDataChangeViolation,
+ "28000": InvalidAuthorizationSpecification,
+ "28P01": InvalidPassword,
+ "2B000": DependentPrivilegeDescriptorsStillExist,
+ "2BP01": DependentObjectsStillExist,
+ "2D000": InvalidTransactionTermination,
+ "2F000": SQLRoutineException,
+ "2F005": FunctionExecutedNoReturnStatement,
+ "2F002": SREModifyingSQLDataNotPermitted,
+ "2F003": SREProhibitedSQLStatementAttempted,
+ "2F004": SREReadingSQLDataNotPermitted,
+ "34000": InvalidCursorName,
+ "38000": ExternalRoutineException,
+ "38001": ContainingSQLNotPermitted,
+ "38002": EREModifyingSQLDataNotPermitted,
+ "38003": EREProhibitedSQLStatementAttempted,
+ "38004": EREReadingSQLDataNotPermitted,
+ "39000": ExternalRoutineInvocationException,
+ "39001": InvalidSqlstateReturned,
+ "39004": ERIENullValueNotAllowed,
+ "39P01": TriggerProtocolViolated,
+ "39P02": SrfProtocolViolated,
+ "39P03": EventTriggerProtocolViolated,
+ "3B000": SavepointException,
+ "3B001": InvalidSavepointSpecification,
+ "3D000": InvalidCatalogName,
+ "3F000": InvalidSchemaName,
+ "40000": TransactionRollback,
+ "40002": TransactionIntegrityConstraintViolation,
+ "40001": SerializationFailure,
+ "40003": StatementCompletionUnknown,
+ "40P01": DeadlockDetected,
+ "42000": SyntaxErrorOrAccessRuleViolation,
+ "42601": SyntaxError,
+ "42501": InsufficientPrivilege,
+ "42846": CannotCoerce,
+ "42803": GroupingError,
+ "42P20": WindowingError,
+ "42P19": InvalidRecursion,
+ "42830": InvalidForeignKey,
+ "42602": InvalidName,
+ "42622": NameTooLong,
+ "42939": ReservedName,
+ "42804": DatatypeMismatch,
+ "42P18": IndeterminateDatatype,
+ "42P21": CollationMismatch,
+ "42P22": IndeterminateCollation,
+ "42809": WrongObjectType,
+ "428C9": GeneratedAlways,
+ "42703": UndefinedColumn,
+ "42883": UndefinedFunction,
+ "42P01": UndefinedTable,
+ "42P02": UndefinedParameter,
+ "42704": UndefinedObject,
+ "42701": DuplicateColumn,
+ "42P03": DuplicateCursor,
+ "42P04": DuplicateDatabase,
+ "42723": DuplicateFunction,
+ "42P05": DuplicatePreparedStatement,
+ "42P06": DuplicateSchema,
+ "42P07": DuplicateTable,
+ "42712": DuplicateAlias,
+ "42710": DuplicateObject,
+ "42702": AmbiguousColumn,
+ "42725": AmbiguousFunction,
+ "42P08": AmbiguousParameter,
+ "42P09": AmbiguousAlias,
+ "42P10": InvalidColumnReference,
+ "42611": InvalidColumnDefinition,
+ "42P11": InvalidCursorDefinition,
+ "42P12": InvalidDatabaseDefinition,
+ "42P13": InvalidFunctionDefinition,
+ "42P14": InvalidPreparedStatementDefinition,
+ "42P15": InvalidSchemaDefinition,
+ "42P16": InvalidTableDefinition,
+ "42P17": InvalidObjectDefinition,
+ "44000": WithCheckOptionViolation,
+ "53000": InsufficientResources,
+ "53100": DiskFull,
+ "53200": OutOfMemory,
+ "53300": TooManyConnections,
+ "53400": ConfigurationLimitExceeded,
+ "54000": ProgramLimitExceeded,
+ "54001": StatementTooComplex,
+ "54011": TooManyColumns,
+ "54023": TooManyArguments,
+ "55000": ObjectNotInPrerequisiteState,
+ "55006": ObjectInUse,
+ "55P02": CantChangeRuntimeParam,
+ "55P03": LockNotAvailable,
+ "55P04": UnsafeNewEnumValueUsage,
+ "57000": OperatorIntervention,
+ "57014": QueryCanceled,
+ "57P01": AdminShutdown,
+ "57P02": CrashShutdown,
+ "57P03": CannotConnectNow,
+ "57P04": DatabaseDropped,
+ "57P05": IdleSessionTimeout,
+ "58000": SystemError,
+ "58030": IoError,
+ "58P01": UndefinedFile,
+ "58P02": DuplicateFile,
+ "58P03": FileNameTooLong,
+ "F0000": ConfigFileError,
+ "F0001": LockFileExists,
+ "HV000": FDWError,
+ "HV005": FDWColumnNameNotFound,
+ "HV002": FDWDynamicParameterValueNeeded,
+ "HV010": FDWFunctionSequenceError,
+ "HV021": FDWInconsistentDescriptorInformation,
+ "HV024": FDWInvalidAttributeValue,
+ "HV007": FDWInvalidColumnName,
+ "HV008": FDWInvalidColumnNumber,
+ "HV004": FDWInvalidDataType,
+ "HV006": FDWInvalidDataTypeDescriptors,
+ "HV091": FDWInvalidDescriptorFieldIdentifier,
+ "HV00B": FDWInvalidHandle,
+ "HV00C": FDWInvalidOptionIndex,
+ "HV00D": FDWInvalidOptionName,
+ "HV090": FDWInvalidStringLengthOrBufferLength,
+ "HV00A": FDWInvalidStringFormat,
+ "HV009": FDWInvalidUseOfNullPointer,
+ "HV014": FDWTooManyHandles,
+ "HV001": FDWOutOfMemory,
+ "HV00P": FDWNoSchemas,
+ "HV00J": FDWOptionNameNotFound,
+ "HV00K": FDWReplyHandle,
+ "HV00Q": FDWSchemaNotFound,
+ "HV00R": FDWTableNotFound,
+ "HV00L": FDWUnableToCreateExecution,
+ "HV00M": FDWUnableToCreateReply,
+ "HV00N": FDWUnableToEstablishConnection,
+ "P0000": PlpgsqlError,
+ "P0001": RaiseException,
+ "P0002": NoDataFound,
+ "P0003": TooManyRows,
+ "P0004": AssertFailure,
+ "XX000": InternalError,
+ "XX001": DataCorrupted,
+ "XX002": IndexCorrupted,
+}
+
+
+__all__ = [
+ "InvalidCursorName",
+ "UndefinedParameter",
+ "UndefinedColumn",
+ "NotAnXmlDocument",
+ "FDWOutOfMemory",
+ "InvalidRoleSpecification",
+ "InvalidArgumentForNthValueFunction",
+ "SQLJsonObjectNotFound",
+ "FDWSchemaNotFound",
+ "InvalidParameterValue",
+ "InvalidTableDefinition",
+ "AssertFailure",
+ "FDWInvalidOptionName",
+ "InvalidEscapeOctet",
+ "ReadOnlySQLTransaction",
+ "ExternalRoutineInvocationException",
+ "CrashShutdown",
+ "FDWInvalidOptionIndex",
+ "NotNullViolation",
+ "ConfigFileError",
+ "InvalidSQLJsonSubscript",
+ "InvalidForeignKey",
+ "InsufficientResources",
+ "ObjectNotInPrerequisiteState",
+ "InvalidRowCountInLimitClause",
+ "IntervalFieldOverflow",
+ "CollationMismatch",
+ "InvalidArgumentForNtileFunction",
+ "InvalidCharacterValueForCast",
+ "NonUniqueKeysInAJsonObject",
+ "DependentPrivilegeDescriptorsStillExist",
+ "InFailedSQLTransaction",
+ "GroupingError",
+ "TransactionTimeout",
+ "CaseNotFound",
+ "ConnectionException",
+ "DuplicateJsonObjectKeyValue",
+ "InvalidSchemaDefinition",
+ "FDWUnableToCreateReply",
+ "UndefinedTable",
+ "SequenceGeneratorLimitExceeded",
+ "InvalidJsonText",
+ "IdleSessionTimeout",
+ "NullValueNotAllowed",
+ "BranchTransactionAlreadyActive",
+ "InvalidGrantOperation",
+ "NullValueNoIndicatorParameter",
+ "ProtocolViolation",
+ "FDWInvalidDataTypeDescriptors",
+ "TriggeredDataChangeViolation",
+ "ExternalRoutineException",
+ "InvalidSqlstateReturned",
+ "PlpgsqlError",
+ "InvalidXmlContent",
+ "TriggeredActionException",
+ "SQLClientUnableToEstablishSQLConnection",
+ "FDWTableNotFound",
+ "NumericValueOutOfRange",
+ "RestrictViolation",
+ "AmbiguousParameter",
+ "StatementTooComplex",
+ "UnsafeNewEnumValueUsage",
+ "NonNumericSQLJsonItem",
+ "InvalidIndicatorParameterValue",
+ "ExclusionViolation",
+ "OperatorIntervention",
+ "QueryCanceled",
+ "Warning",
+ "InvalidArgumentForSQLJsonDatetimeFunction",
+ "ForeignKeyViolation",
+ "StringDataLengthMismatch",
+ "SQLRoutineException",
+ "TooManyConnections",
+ "TooManyJsonObjectMembers",
+ "NoData",
+ "UntranslatableCharacter",
+ "FDWUnableToEstablishConnection",
+ "LockFileExists",
+ "SREReadingSQLDataNotPermitted",
+ "IndeterminateDatatype",
+ "CheckViolation",
+ "InvalidDatabaseDefinition",
+ "NoActiveSQLTransactionForBranchTransaction",
+ "SQLServerRejectedEstablishmentOfSQLConnection",
+ "DuplicateFile",
+ "FDWInvalidColumnNumber",
+ "TransactionRollback",
+ "MoreThanOneSQLJsonItem",
+ "WithCheckOptionViolation",
+ "FDWNoSchemas",
+ "GeneratedAlways",
+ "CannotConnectNow",
+ "CardinalityViolation",
+ "InvalidAuthorizationSpecification",
+ "SQLJsonNumberNotFound",
+ "SQLJsonMemberNotFound",
+ "InvalidUseOfEscapeCharacter",
+ "UnterminatedCString",
+ "TrimError",
+ "SrfProtocolViolated",
+ "DiskFull",
+ "TooManyColumns",
+ "InvalidObjectDefinition",
+ "InvalidArgumentForLogarithm",
+ "TooManyJsonArrayElements",
+ "OutOfMemory",
+ "EREProhibitedSQLStatementAttempted",
+ "FDWInvalidStringFormat",
+ "StackedDiagnosticsAccessedWithoutActiveHandler",
+ "SchemaAndDataStatementMixingNotSupported",
+ "InternalError",
+ "InvalidEscapeCharacter",
+ "FDWError",
+ "ImplicitZeroBitPaddingWarning",
+ "DivisionByZero",
+ "InvalidTablesampleArgument",
+ "DeadlockDetected",
+ "CantChangeRuntimeParam",
+ "UndefinedObject",
+ "UniqueViolation",
+ "InvalidCursorDefinition",
+ "ConnectionFailure",
+ "UndefinedFunction",
+ "FDWFunctionSequenceError",
+ "ErrorInAssignment",
+ "SuccessfulCompletion",
+ "StringDataRightTruncation",
+ "FDWTooManyHandles",
+ "FDWInvalidDataType",
+ "ActiveSQLTransaction",
+ "InvalidTextRepresentation",
+ "InvalidSQLStatementName",
+ "PrivilegeNotGrantedWarning",
+ "SREModifyingSQLDataNotPermitted",
+ "IndeterminateCollation",
+ "SystemError",
+ "NullValueEliminatedInSetFunctionWarning",
+ "DependentObjectsStillExist",
+ "InvalidSchemaName",
+ "DuplicateColumn",
+ "FunctionExecutedNoReturnStatement",
+ "InvalidColumnDefinition",
+ "DynamicResultSetsReturnedWarning",
+ "IdleInTransactionSessionTimeout",
+ "StatementCompletionUnknown",
+ "CannotCoerce",
+ "InvalidTransactionState",
+ "DuplicateTable",
+ "BadCopyFileFormat",
+ "ZeroLengthCharacterString",
+ "SyntaxErrorOrAccessRuleViolation",
+ "SingletonSQLJsonItemRequired",
+ "IndexCorrupted",
+ "FDWInvalidColumnName",
+ "DataCorrupted",
+ "ERIENullValueNotAllowed",
+ "ArraySubscriptError",
+ "FDWReplyHandle",
+ "DiagnosticsException",
+ "InvalidTablesampleRepeat",
+ "SQLJsonItemCannotBeCastToTargetType",
+ "FDWInvalidHandle",
+ "InvalidPassword",
+ "InvalidEscapeSequence",
+ "EscapeCharacterConflict",
+ "InvalidSavepointSpecification",
+ "FDWInvalidAttributeValue",
+ "ContainingSQLNotPermitted",
+ "LocatorException",
+ "DatatypeMismatch",
+ "InvalidCursorState",
+ "InvalidName",
+ "IndicatorOverflow",
+ "ReservedName",
+ "DatetimeFieldOverflow",
+ "FDWInconsistentDescriptorInformation",
+ "FloatingPointException",
+ "AmbiguousAlias",
+ "InvalidRecursion",
+ "WrongObjectType",
+ "UndefinedFile",
+ "LockNotAvailable",
+ "InvalidRowCountInResultOffsetClause",
+ "ObjectInUse",
+ "DeprecatedFeatureWarning",
+ "FDWDynamicParameterValueNeeded",
+ "DuplicateFunction",
+ "InvalidXmlDocument",
+ "StringDataRightTruncationWarning",
+ "DuplicatePreparedStatement",
+ "InvalidGrantor",
+ "EventTriggerProtocolViolated",
+ "FDWInvalidUseOfNullPointer",
+ "FDWUnableToCreateExecution",
+ "ConnectionDoesNotExist",
+ "InvalidCatalogName",
+ "InvalidArgumentForXquery",
+ "FDWColumnNameNotFound",
+ "TransactionIntegrityConstraintViolation",
+ "InvalidPreparedStatementDefinition",
+ "FDWInvalidDescriptorFieldIdentifier",
+ "FDWOptionNameNotFound",
+ "InvalidArgumentForPowerFunction",
+ "FDWInvalidStringLengthOrBufferLength",
+ "SREProhibitedSQLStatementAttempted",
+ "NoDataFound",
+ "DuplicateDatabase",
+ "FeatureNotSupported",
+ "IntegrityConstraintViolation",
+ "AmbiguousColumn",
+ "PrivilegeNotRevokedWarning",
+ "FileNameTooLong",
+ "InvalidArgumentForWidthBucketFunction",
+ "HeldCursorRequiresSameIsolationLevel",
+ "NoSQLJsonItem",
+ "IoError",
+ "SavepointException",
+ "NoActiveSQLTransaction",
+ "InvalidFunctionDefinition",
+ "AdminShutdown",
+ "DatabaseDropped",
+ "InvalidRegularExpression",
+ "WindowingError",
+ "InvalidColumnReference",
+ "InvalidBinaryRepresentation",
+ "SQLJsonScalarRequired",
+ "ConfigurationLimitExceeded",
+ "SyntaxError",
+ "SerializationFailure",
+ "ProgramLimitExceeded",
+ "DuplicateSchema",
+ "SQLStatementNotYetComplete",
+ "LibpqError",
+ "DataException",
+ "SubstringError",
+ "InvalidLocatorSpecification",
+ "InappropriateAccessModeForBranchTransaction",
+ "EREModifyingSQLDataNotPermitted",
+ "InsufficientPrivilege",
+ "NoAdditionalDynamicResultSetsReturned",
+ "SQLJsonArrayNotFound",
+ "NameTooLong",
+ "InvalidTimeZoneDisplacementValue",
+ "InappropriateIsolationLevelForBranchTransaction",
+ "RaiseException",
+ "EREReadingSQLDataNotPermitted",
+ "TriggerProtocolViolated",
+ "NonstandardUseOfEscapeCharacter",
+ "InvalidTransactionInitiation",
+ "DuplicateAlias",
+ "TransactionResolutionUnknown",
+ "TooManyRows",
+ "InvalidXmlComment",
+ "MostSpecificTypeMismatch",
+ "DuplicateObject",
+ "DuplicateCursor",
+ "AmbiguousFunction",
+ "TooManyArguments",
+ "InvalidXmlProcessingInstruction",
+ "InvalidTransactionTermination",
+ "InvalidDatetimeFormat",
+ "InvalidPrecedingOrFollowingSize",
+ "CharacterNotInRepertoire",
+ "SQLSTATE_TO_EXCEPTION",
+]
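
To illustrate the intent of the generated hierarchy (a hypothetical usage
sketch, not part of the patch): since the subclasses mirror the SQLSTATE
class/subclass structure, a test can assert at whichever granularity it
cares about. This assumes a conn fixture whose sql() raises these
exceptions on error, as in the fixtures further below.

    import pytest
    from libpq import errors

    def test_division_by_zero(conn):
        # Catch the exact error code (22012)...
        with pytest.raises(errors.DivisionByZero):
            conn.sql("SELECT 1/0")

    def test_any_data_exception(conn):
        # ...or the whole SQLSTATE class 22, since DivisionByZero
        # inherits from DataException (22000).
        with pytest.raises(errors.DataException):
            conn.sql("SELECT 1/0")
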
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..764a96c2478
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+PostgreSQL error types mapped from SQLSTATE codes.
+
+This module provides LibpqError and its subclasses for handling PostgreSQL
+errors based on SQLSTATE codes. The exception classes in _generated_errors.py
+are auto-generated from src/backend/utils/errcodes.txt.
+
+To regenerate: src/tools/generate_pytest_libpq_errors.py
+"""
+
+from typing import Optional
+
+from ._error_base import LibpqError, LibpqWarning
+from ._generated_errors import (
+ SQLSTATE_TO_EXCEPTION,
+)
+from ._generated_errors import * # noqa: F403
+
+
+def get_exception_class(sqlstate: Optional[str]) -> type:
+ """Get the appropriate exception class for a SQLSTATE code."""
+ if sqlstate in SQLSTATE_TO_EXCEPTION:
+ return SQLSTATE_TO_EXCEPTION[sqlstate]
+ return LibpqError
+
+
+def make_error(message: str, *, sqlstate: Optional[str] = None, **kwargs) -> LibpqError:
+ """Create an appropriate LibpqError subclass based on the SQLSTATE code."""
+ exc_class = get_exception_class(sqlstate)
+ return exc_class(message, sqlstate=sqlstate, **kwargs)
+
+
+__all__ = [
+ "LibpqError",
+ "LibpqWarning",
+ "make_error",
+]
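
As a quick sketch of the factory's dispatch behavior (not part of the
patch): a known SQLSTATE maps to its generated subclass, and anything
unrecognized degrades gracefully to the LibpqError base class.

    from libpq.errors import LibpqError, make_error
    from libpq.errors import UniqueViolation  # re-exported from _generated_errors

    e = make_error("duplicate key value", sqlstate="23505")
    assert isinstance(e, UniqueViolation)

    # An unknown (or missing) SQLSTATE falls back to the base class.
    e = make_error("mystery failure", sqlstate="ZZ999")
    assert type(e) is LibpqError
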
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,7 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
- 'pyt/test_something.py',
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extras(*keys: str):
+ """
+ A convenience decorator which skips tests unless all of the required keys
+ are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys are not present
+ in PG_TEST_EXTRA. Use this inside fixtures where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
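
To make the skipping semantics concrete (a hypothetical example, not part
of the patch): every listed key must appear, whitespace-separated, in
PG_TEST_EXTRA for the test to run.

    import pypg

    #   PG_TEST_EXTRA=""          -> skipped
    #   PG_TEST_EXTRA="ssl"       -> skipped (ldap missing)
    #   PG_TEST_EXTRA="ssl ldap"  -> runs
    @pypg.require_test_extras("ssl", "ldap")
    def test_needs_ssl_and_ldap():
        ...
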
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+ This fixture is per-module, which means it is generally only useful for
+ configuring timeouts of operations that happen in the setup phase of other
+ module-scoped fixtures. Using it directly in a test would mean that each
+ subsequent test in the module gets a reduced timeout.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python interpreter.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context, which is useful when certain settings need
+ to be overridden at the module level through autouse fixtures. An example
+ of this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
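
Putting the fixture layers together (a hypothetical test, not part of the
patch): conn uses the shared per-test server context, while create_pg
builds independent throwaway instances; on failure, the logs of every
server recorded for the test are attached to the report by the hook above.

    def test_two_servers(conn, create_pg):
        # The shared server, whose changes are rolled back after the test.
        assert conn.sql("SELECT 1") == 1

        # An extra, independently managed instance.
        other = create_pg()
        other_conn = other.connect()
        assert other_conn.sql("SELECT 2") == 2
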
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
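+
+ Example (illustrative; usually reached via PostgresServer.reloading()):
+
+ with server.reloading() as s:
+ s.hba.prepend(["host", "all", "all", "127.0.0.1/32", "trust"])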
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
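+
+ Example (illustrative; usually reached via PostgresServer.reloading()):
+
+ with server.reloading() as s:
+ s.conf.set(log_connections="on", work_mem="64MB")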
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one.
+ Currently only allowed if hostaddr is also provided.
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # Record the host to use for connections: either the socket
+ # directory or the TCP address.
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+ # Between closing temp_sock and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put
+ # any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
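+
+ Example (a sketch; remaining_timeout is the callable provided by the
+ pytest fixtures):
+
+ with server.start_new_test(remaining_timeout):
+ ... # run one test; changes registered during it unwind on exit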
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(self._bindir / "psql", "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
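+
+ Example (illustrative):
+
+ with server.subcontext():
+ server.create_users("admin") # dropped again when the context exits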
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, any number of matches is accepted.
+ If a number, exactly that many matches are required.
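+
+ Example (illustrative):
+
+ with pg.log_contains("connection received", times=1):
+ pg.connect()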
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by the pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6, which we still like to support.
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
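+
+# Illustrative usage (bindir here is a hypothetical pathlib.Path):
+#
+# run("pg_ctl --version | head -n1") # a single string runs via the shell
+# run(bindir / "pg_ctl", "--version") # multiple arguments run directly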
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
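+ """capture runs the given command and returns its stdout, minus the trailing newline."""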
+ return run(
+ command, *args, stdout=stdout, encoding=encoding, **kwargs
+ ).stdout.removesuffix("\n")
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..ad109039668
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+import libpq
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises SyntaxError with correct SQLSTATE."""
+ with pytest.raises(libpq.errors.SyntaxError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields and can be caught as parent class."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(libpq.errors.UniqueViolation) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
diff --git a/src/tools/generate_pytest_libpq_errors.py b/src/tools/generate_pytest_libpq_errors.py
new file mode 100755
index 00000000000..ba92891c17a
--- /dev/null
+++ b/src/tools/generate_pytest_libpq_errors.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Generate src/test/pytest/libpq/_generated_errors.py from errcodes.txt.
+"""
+
+import sys
+from pathlib import Path
+
+
+ACRONYMS = {"sql", "fdw"}
+WORD_MAP = {
+ "sqlclient": "SQLClient",
+ "sqlserver": "SQLServer",
+ "sqlconnection": "SQLConnection",
+}
+
+
+def snake_to_pascal(name: str) -> str:
+ """Convert snake_case to PascalCase, keeping acronyms uppercase."""
+ words = []
+ for word in name.split("_"):
+ if word in WORD_MAP:
+ words.append(WORD_MAP[word])
+ elif word in ACRONYMS:
+ words.append(word.upper())
+ else:
+ words.append(word.capitalize())
+ return "".join(words)
+
+
+def parse_errcodes(path: Path):
+ """Parse errcodes.txt and return list of (sqlstate, macro_name, spec_name) tuples."""
+ errors = []
+
+ with open(path) as f:
+ for line in f:
+ parts = line.split()
+ if len(parts) >= 4 and len(parts[0]) == 5:
+ sqlstate, _, macro_name, spec_name = parts[:4]
+ errors.append((sqlstate, macro_name, spec_name))
+
+ return errors
+
+
+def macro_to_class_name(macro_name: str) -> str:
+ """Convert ERRCODE_FOO_BAR to FooBar."""
+ name = macro_name.removeprefix("ERRCODE_")
+ # Move WARNING prefix to the end as a suffix
+ if name.startswith("WARNING_"):
+ name = name.removeprefix("WARNING_") + "_WARNING"
+ return snake_to_pascal(name.lower())
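+
+# For example (illustrative): ERRCODE_WARNING_DEPRECATED_FEATURE becomes
+# "DeprecatedFeatureWarning"; the WARNING_ prefix is moved to the end first.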
+
+
+def generate_errors(errcodes_path: Path):
+ """Generate the _generated_errors.py content."""
+ errors = parse_errcodes(errcodes_path)
+
+ # Find spec_names that appear more than once (collisions)
+ spec_name_counts: dict[str, int] = {}
+ for _, _, spec_name in errors:
+ spec_name_counts[spec_name] = spec_name_counts.get(spec_name, 0) + 1
+ colliding_spec_names = {
+ name for name, count in spec_name_counts.items() if count > 1
+ }
+
+ lines = [
+ "# Copyright (c) 2025, PostgreSQL Global Development Group",
+ "# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.",
+ "",
+ '"""',
+ "Generated PostgreSQL error classes mapped from SQLSTATE codes.",
+ '"""',
+ "",
+ "from typing import Dict",
+ "",
+ "from ._error_base import LibpqError, LibpqWarning",
+ "",
+ "",
+ ]
+
+ generated_classes = {"LibpqError"}
+ sqlstate_to_exception = {}
+
+ for sqlstate, macro_name, spec_name in errors:
+ # 000 errors define the parent class for all errors in this SQLSTATE class
+ if sqlstate.endswith("000"):
+ exc_name = snake_to_pascal(spec_name)
+ if exc_name == "Warning":
+ parent = "LibpqWarning"
+ else:
+ parent = "LibpqError"
+ else:
+ if spec_name in colliding_spec_names:
+ exc_name = macro_to_class_name(macro_name)
+ else:
+ exc_name = snake_to_pascal(spec_name)
+ # Use parent class if available, otherwise LibpqError
+ parent = sqlstate_to_exception.get(sqlstate[:2] + "000", "LibpqError")
+ # Warnings should end with "Warning"
+ if parent == "Warning" and not exc_name.endswith("Warning"):
+ exc_name += "Warning"
+
+ generated_classes.add(exc_name)
+ sqlstate_to_exception[sqlstate] = exc_name
+ lines.extend(
+ [
+ f"class {exc_name}({parent}):",
+ f' """SQLSTATE {sqlstate} - {spec_name.replace("_", " ")}."""',
+ "",
+ " pass",
+ "",
+ "",
+ ]
+ )
+
+ lines.append("SQLSTATE_TO_EXCEPTION: Dict[str, type] = {")
+ for sqlstate, exc_name in sqlstate_to_exception.items():
+ lines.append(f' "{sqlstate}": {exc_name},')
+ lines.extend(["}", "", ""])
+
+ all_exports = list(generated_classes) + ["SQLSTATE_TO_EXCEPTION"]
+ lines.append("__all__ = [")
+ for name in all_exports:
+ lines.append(f' "{name}",')
+ lines.append("]")
+
+ return "\n".join(lines) + "\n"
+
+
+if __name__ == "__main__":
+ script_dir = Path(__file__).resolve().parent
+ src_root = script_dir.parent.parent
+
+ errcodes_path = src_root / "src" / "backend" / "utils" / "errcodes.txt"
+ output_path = (
+ src_root / "src" / "test" / "pytest" / "libpq" / "_generated_errors.py"
+ )
+
+ if not errcodes_path.exists():
+ print(f"Error: {errcodes_path} not found", file=sys.stderr)
+ sys.exit(1)
+
+ output = generate_errors(errcodes_path)
+ output_path.write_text(output)
+ print(f"Generated {output_path}")
--
2.52.0
Attachment: v5-0005-Convert-load-balance-tests-from-perl-to-python.patch (text/x-patch)
From 92b671c822f6f68247fc864ec1dbe03484935bd7 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v5 5/7] Convert load balance tests from perl to python
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 9fe321147fc..41ea88c7388 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index b259c998fa2..6d62ac17edb 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes addressed via comma-separated host/port lists.
+
+ The nodes share the module's socket directory but listen on distinct
+ ports; connect() hands the full host and port lists to libpq.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+ Windows or Linux are required to run this test because these OSes allow
+ binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
+ don't. We need to bind to different IP addresses, so that we can use these
+ different IP addresses in the hosts file.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+ except OSError:
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+ # Statistically, each node should receive at least one connection.
+ # The probability that a given node receives none is (2/3)^50 ≈ 1.57e-9.
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 7a4c14ada98..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2025, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index 2b4bd261c3d..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2025, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
Attachment: v5-0006-WIP-pytest-Add-some-SSL-client-tests.patch (text/x-patch)
From 99a8684e81edf2dd06b0f8b4b064e03b070ca6e8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v5 6/7] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 ++-
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 434 insertions(+), 6 deletions(-)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a2c3febc30c..41d2a3c1867 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -229,6 +229,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -323,6 +324,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -346,8 +348,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -508,8 +511,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -658,6 +662,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -678,6 +683,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -816,7 +822,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -879,7 +885,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform an SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure the client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
Attachment: v5-0007-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)
From 2d78377ddcb558debee06040dbd58e2012dd9c8a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v5 7/7] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing (see
the sketch below)
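One possible shape for that decrementing behavior, as a minimal sketch
(not part of the patch):

    import time

    def make_remaining_timeout(seconds: float):
        # Returns a remaining_timeout()-style callable that counts down
        # from a fixed deadline rather than restarting on each call.
        deadline = time.monotonic() + seconds

        def remaining() -> float:
            return max(deadline - time.monotonic(), 0.0)

        return remaining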
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
On Sat Dec 27, 2025 at 6:26 PM CET, Jelte Fennema-Nio wrote:
Attached is a version where I addressed all of those comments (the few
that I didn't, or that I addressed in non-obvious ways, are discussed at
the end). I
also made a lot of improvements to the patchset:
Rebased to resolve conflict with master.
Attachments:
Attachment: v6-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch (text/x-patch)
From 3e185914f253d25803ce485e76cc905ad99664d3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v6 1/7] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index d7c5193d4ce..551e27f5eb3 100644
--- a/meson.build
+++ b/meson.build
@@ -3981,6 +3981,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -4017,3 +4018,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: b7057e43467ff2d7c04c3abcf5ec35fcc7db9611
--
2.52.0
Attachment: v6-0002-Add-support-for-pytest-test-suites.patch (text/x-patch)
From 1f091d94a7523be6233ce93e2895c1fd7e1198a6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v6 2/7] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
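For reference, the plugin's TAP stream looks roughly like this (an
illustration pieced together from the pgtap.py code below; exact test
IDs will vary):

    1..3
    ok 1 - pyt/test_client.py::test_verify_full_connection
    ok 2 - pyt/test_client.py::test_server_with_ssl_disabled[require] # skip requires SSL support to be configured
    not ok 3 - pyt/test_server.py::test_direct_ssl_certificate_authentication[trust-None-None]

Failure details are written to stderr rather than the TAP stream, so
that Meson's mtest actually shows them with --print-errorlogs.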
TODOs:
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 37 +++++--
.gitignore | 4 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 29 +++++-
meson.build | 107 +++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 1 +
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 631 insertions(+), 15 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 038d043d00e..a83acb39e97 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -225,7 +227,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -317,7 +321,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -337,7 +344,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -496,8 +505,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -521,14 +532,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -665,6 +677,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -714,6 +728,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -787,6 +802,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -795,8 +812,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -859,7 +878,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a8c73bba9ba 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,8 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
+*.egg-info/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +45,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 14ad0a5006f..f28db423cd8 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19229,6 +19262,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 01b3bbc1be8..8226e2a1342 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2412,6 +2417,26 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ PGAC_PATH_PROGS(UV, uv)
+ if test -n "$UV"; then
+ AC_MSG_CHECKING([whether uv can install pytest dependencies])
+ if "$UV" pip install "$srcdir" >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$UV run pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found and uv failed to install dependencies])
+ fi
+ else
+ AC_MSG_ERROR([pytest not found])
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 551e27f5eb3..2ec125116a2 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,41 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+uv = not_found_dep
+use_uv = false
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: false)
+
+ # If pytest not found, try installing with uv
+ if not pytest.found()
+ uv = find_program('uv', native: true, required: false)
+ if uv.found()
+ message('Installing pytest dependencies with uv...')
+ uv_install = run_command(uv, 'pip', 'install', meson.project_source_root(), check: false)
+ if uv_install.returncode() == 0
+ use_uv = true
+ pytest_enabled = true
+ endif
+ endif
+ else
+ pytest_enabled = true
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3808,6 +3843,76 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ if use_uv
+ test_command = [uv.full_path(), 'run', 'pytest']
+ elif pytest_enabled
+ test_command = [pytest.full_path()]
+ else
+ # Dummy value - test will be skipped anyway
+ test_command = ['pytest']
+ endif
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ # We also configure the same PYTHONPATH in the pytest settings in
+ # pyproject.toml, but pytest versions below 8.4 only actually use that
+ # value after plugin loading. So we need to configure it here too. This
+ # won't help people manually running pytest outside of meson/make, but we
+ # expect those to use a recent enough version of pytest anyway (and if
+ # not they can manually configure PYTHONPATH too).
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3982,6 +4087,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -4022,6 +4128,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 06bf5627d3c..88f22e699d9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index c6edf14ec44..5b9a804aa94 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 3eb0a06abb4..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -18,6 +18,7 @@ SUBDIRS = \
modules \
perl \
postmaster \
+ pytest \
recovery \
regress \
subscription
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..d08a6ef61c2 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (i.e.\033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
--
2.52.0
Attachment: v6-0003-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch (text/x-patch)
From 6b583ba6f6697d7eafb254ef540b7583e128990e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v6 3/7] ci: Add MTEST_SUITES for optional test tailoring
Should make it easier to control the test cycle time for Cirrus. Add the
desired suites (remembering `--suite setup`!) to the top-level envvar.
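For example, to run just the pytest suites plus the required setup suite
(values illustrative):

    MTEST_SUITES: --suite setup --suite pytest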
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a83acb39e97..a2c3febc30c 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -251,7 +252,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -396,7 +397,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -614,7 +615,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -627,7 +628,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -751,7 +752,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -834,7 +835,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -895,7 +896,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.52.0
Attachment: v6-0004-Add-pytest-infrastructure-to-interact-with-Postgr.patch (text/x-patch)
From 29cb1584673f078ee38b6bcb60e17db1389a218c Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v6 4/7] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be easy enough to use that the
actual tests contain only the logic for the behaviour under test, rather
than a bunch of boilerplate. Examples of this: types are converted to
their Python counterparts automatically; errors become actual Python
exceptions; and query results consisting of a single row or cell are
unpacked automatically, so you don't have to write rows[0][0] to get at
a single-cell result.
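As a purely illustrative sketch of that intent (names here follow the
description above, not a confirmed API; pg.connect(), sql(), and
libpq.errors.UndefinedTable are hypothetical stand-ins):

    def test_example(pg):
        conn = pg.connect()  # hypothetical helper on the server fixture
        # A single-cell result comes back unpacked: 2, not [(2,)].
        assert conn.sql("SELECT 1 + 1") == 2
        # Server errors turn into Python exceptions generated from
        # errcodes.txt (exact class name hypothetical):
        with pytest.raises(libpq.errors.UndefinedTable):
            conn.sql("SELECT * FROM missing_table")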
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
---
doc/src/sgml/regress.sgml | 54 +-
pyproject.toml | 3 +
src/backend/utils/errcodes.txt | 5 +
src/test/pytest/README | 140 +-
src/test/pytest/libpq/__init__.py | 36 +
src/test/pytest/libpq/_core.py | 489 +++++
src/test/pytest/libpq/_error_base.py | 74 +
src/test/pytest/libpq/_generated_errors.py | 2116 ++++++++++++++++++++
src/test/pytest/libpq/errors.py | 39 +
src/test/pytest/meson.build | 5 +-
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 +
src/test/pytest/pypg/fixtures.py | 335 ++++
src/test/pytest/pypg/server.py | 470 +++++
src/test/pytest/pypg/util.py | 42 +
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 +
src/test/pytest/pyt/test_libpq.py | 172 ++
src/test/pytest/pyt/test_multi_server.py | 46 +
src/test/pytest/pyt/test_query_helpers.py | 347 ++++
src/tools/generate_pytest_libpq_errors.py | 147 ++
21 files changed, 4634 insertions(+), 3 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/_error_base.py
create mode 100644 src/test/pytest/libpq/_generated_errors.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
create mode 100755 src/tools/generate_pytest_libpq_errors.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..1440815b23a 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,58 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need either
+ <application>pytest</application> or <application>uv</application>
+ installed on your system.
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
+ </para>
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index c96aa7c49ef..40c7555047e 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -21,6 +21,11 @@
# doc/src/sgml/errcodes-table.sgml
# a SGML table of error codes for inclusion in the documentation
#
+# src/test/pytest/libpq/_generated_errors.py
+# Python exception classes for the pytest libpq wrapper
+# Note: This needs to be manually regenerated by running
+# src/tools/generate_pytest_libpq_errors.py
+#
# The format of this file is one error code per line, with the following
# whitespace-separated fields:
#
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..9dc50ca111f 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,139 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have either pytest or uv
+already installed.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf-based builds, you can run:
+ make check
+or
+ make installcheck
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+  A simple but user-friendly Python wrapper around libpq.
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A connected PGconn instance to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example test:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server is independent
+ assert node1.port != node2.port
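+
+The connect fixture creates extra connections with custom options; as a
+sketch (keyword arguments are passed through as libpq connection options):
+
+    def test_custom_connection(connect):
+        conn = connect(dbname="postgres")
+        assert conn.sql("SELECT current_database()") == "postgres"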
+
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+    # After the test finishes, the original configuration is restored and
+    # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
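+
+For example (a sketch; wal_level is one setting that requires a restart):
+
+    with pg.restarting() as s:
+        s.conf.set(wal_level="logical")
+    # Server is restarted here, and again when the original configuration
+    # is restored after the test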
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
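+
+For example (a sketch):
+
+    def test_with_deadline(remaining_timeout):
+        # remaining_timeout() returns the number of seconds left for this test
+        assert remaining_timeout() > 0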
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..cb4d18b6206
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError, LibpqWarning
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "LibpqWarning",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..0d77996d572
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,489 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import os
+import platform
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError, make_error
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+    if system == "Windows":
+        # On Windows, libpq.dll is confusingly in bindir, not libdir, and we
+        # need to add its directory to the search path.
+        libpq_path = os.path.join(bindir, name)
+    else:
+        libpq_path = os.path.join(libdir, name)
+    lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises an appropriate LibpqError subclass based on the error fields.
+ Extracts SQLSTATE and other diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise make_error(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
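+
+    Example (illustrative):
+
+        connstr({"host": "localhost", "password": "a b"})
+        # -> "host=localhost password='a b'"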
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/_error_base.py b/src/test/pytest/libpq/_error_base.py
new file mode 100644
index 00000000000..5c70c077193
--- /dev/null
+++ b/src/test/pytest/libpq/_error_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Base exception classes for libpq errors and warnings.
+"""
+
+from typing import Optional
+
+
+class LibpqExceptionMixin:
+ """Mixin providing PostgreSQL error field attributes."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
+
+
+class LibpqError(LibpqExceptionMixin, RuntimeError):
+ """Base exception for libpq errors."""
+
+ pass
+
+
+class LibpqWarning(LibpqExceptionMixin, UserWarning):
+ """Base exception for libpq warnings."""
+
+ pass
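+
+
+# Illustrative sketch of how these fields are typically inspected in a test
+# (conn and the errors module come from the rest of this patch series):
+#
+#     try:
+#         conn.sql("SELECT 1/0")
+#     except errors.DivisionByZero as e:
+#         assert e.sqlstate == "22012"
+#         assert e.sqlstate_class == "22"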
diff --git a/src/test/pytest/libpq/_generated_errors.py b/src/test/pytest/libpq/_generated_errors.py
new file mode 100644
index 00000000000..f50f3143580
--- /dev/null
+++ b/src/test/pytest/libpq/_generated_errors.py
@@ -0,0 +1,2116 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.
+
+"""
+Generated PostgreSQL error classes mapped from SQLSTATE codes.
+"""
+
+from typing import Dict
+
+from ._error_base import LibpqError, LibpqWarning
+
+
+class SuccessfulCompletion(LibpqError):
+ """SQLSTATE 00000 - successful completion."""
+
+ pass
+
+
+class Warning(LibpqWarning):
+ """SQLSTATE 01000 - warning."""
+
+ pass
+
+
+class DynamicResultSetsReturnedWarning(Warning):
+ """SQLSTATE 0100C - dynamic result sets returned."""
+
+ pass
+
+
+class ImplicitZeroBitPaddingWarning(Warning):
+ """SQLSTATE 01008 - implicit zero bit padding."""
+
+ pass
+
+
+class NullValueEliminatedInSetFunctionWarning(Warning):
+ """SQLSTATE 01003 - null value eliminated in set function."""
+
+ pass
+
+
+class PrivilegeNotGrantedWarning(Warning):
+ """SQLSTATE 01007 - privilege not granted."""
+
+ pass
+
+
+class PrivilegeNotRevokedWarning(Warning):
+ """SQLSTATE 01006 - privilege not revoked."""
+
+ pass
+
+
+class StringDataRightTruncationWarning(Warning):
+ """SQLSTATE 01004 - string data right truncation."""
+
+ pass
+
+
+class DeprecatedFeatureWarning(Warning):
+ """SQLSTATE 01P01 - deprecated feature."""
+
+ pass
+
+
+class NoData(LibpqError):
+ """SQLSTATE 02000 - no data."""
+
+ pass
+
+
+class NoAdditionalDynamicResultSetsReturned(NoData):
+ """SQLSTATE 02001 - no additional dynamic result sets returned."""
+
+ pass
+
+
+class SQLStatementNotYetComplete(LibpqError):
+ """SQLSTATE 03000 - sql statement not yet complete."""
+
+ pass
+
+
+class ConnectionException(LibpqError):
+ """SQLSTATE 08000 - connection exception."""
+
+ pass
+
+
+class ConnectionDoesNotExist(ConnectionException):
+ """SQLSTATE 08003 - connection does not exist."""
+
+ pass
+
+
+class ConnectionFailure(ConnectionException):
+ """SQLSTATE 08006 - connection failure."""
+
+ pass
+
+
+class SQLClientUnableToEstablishSQLConnection(ConnectionException):
+ """SQLSTATE 08001 - sqlclient unable to establish sqlconnection."""
+
+ pass
+
+
+class SQLServerRejectedEstablishmentOfSQLConnection(ConnectionException):
+ """SQLSTATE 08004 - sqlserver rejected establishment of sqlconnection."""
+
+ pass
+
+
+class TransactionResolutionUnknown(ConnectionException):
+ """SQLSTATE 08007 - transaction resolution unknown."""
+
+ pass
+
+
+class ProtocolViolation(ConnectionException):
+ """SQLSTATE 08P01 - protocol violation."""
+
+ pass
+
+
+class TriggeredActionException(LibpqError):
+ """SQLSTATE 09000 - triggered action exception."""
+
+ pass
+
+
+class FeatureNotSupported(LibpqError):
+ """SQLSTATE 0A000 - feature not supported."""
+
+ pass
+
+
+class InvalidTransactionInitiation(LibpqError):
+ """SQLSTATE 0B000 - invalid transaction initiation."""
+
+ pass
+
+
+class LocatorException(LibpqError):
+ """SQLSTATE 0F000 - locator exception."""
+
+ pass
+
+
+class InvalidLocatorSpecification(LocatorException):
+ """SQLSTATE 0F001 - invalid locator specification."""
+
+ pass
+
+
+class InvalidGrantor(LibpqError):
+ """SQLSTATE 0L000 - invalid grantor."""
+
+ pass
+
+
+class InvalidGrantOperation(InvalidGrantor):
+ """SQLSTATE 0LP01 - invalid grant operation."""
+
+ pass
+
+
+class InvalidRoleSpecification(LibpqError):
+ """SQLSTATE 0P000 - invalid role specification."""
+
+ pass
+
+
+class DiagnosticsException(LibpqError):
+ """SQLSTATE 0Z000 - diagnostics exception."""
+
+ pass
+
+
+class StackedDiagnosticsAccessedWithoutActiveHandler(DiagnosticsException):
+ """SQLSTATE 0Z002 - stacked diagnostics accessed without active handler."""
+
+ pass
+
+
+class InvalidArgumentForXquery(LibpqError):
+ """SQLSTATE 10608 - invalid argument for xquery."""
+
+ pass
+
+
+class CaseNotFound(LibpqError):
+ """SQLSTATE 20000 - case not found."""
+
+ pass
+
+
+class CardinalityViolation(LibpqError):
+ """SQLSTATE 21000 - cardinality violation."""
+
+ pass
+
+
+class DataException(LibpqError):
+ """SQLSTATE 22000 - data exception."""
+
+ pass
+
+
+class ArraySubscriptError(DataException):
+ """SQLSTATE 2202E - array subscript error."""
+
+ pass
+
+
+class CharacterNotInRepertoire(DataException):
+ """SQLSTATE 22021 - character not in repertoire."""
+
+ pass
+
+
+class DatetimeFieldOverflow(DataException):
+ """SQLSTATE 22008 - datetime field overflow."""
+
+ pass
+
+
+class DivisionByZero(DataException):
+ """SQLSTATE 22012 - division by zero."""
+
+ pass
+
+
+class ErrorInAssignment(DataException):
+ """SQLSTATE 22005 - error in assignment."""
+
+ pass
+
+
+class EscapeCharacterConflict(DataException):
+ """SQLSTATE 2200B - escape character conflict."""
+
+ pass
+
+
+class IndicatorOverflow(DataException):
+ """SQLSTATE 22022 - indicator overflow."""
+
+ pass
+
+
+class IntervalFieldOverflow(DataException):
+ """SQLSTATE 22015 - interval field overflow."""
+
+ pass
+
+
+class InvalidArgumentForLogarithm(DataException):
+ """SQLSTATE 2201E - invalid argument for logarithm."""
+
+ pass
+
+
+class InvalidArgumentForNtileFunction(DataException):
+ """SQLSTATE 22014 - invalid argument for ntile function."""
+
+ pass
+
+
+class InvalidArgumentForNthValueFunction(DataException):
+ """SQLSTATE 22016 - invalid argument for nth value function."""
+
+ pass
+
+
+class InvalidArgumentForPowerFunction(DataException):
+ """SQLSTATE 2201F - invalid argument for power function."""
+
+ pass
+
+
+class InvalidArgumentForWidthBucketFunction(DataException):
+ """SQLSTATE 2201G - invalid argument for width bucket function."""
+
+ pass
+
+
+class InvalidCharacterValueForCast(DataException):
+ """SQLSTATE 22018 - invalid character value for cast."""
+
+ pass
+
+
+class InvalidDatetimeFormat(DataException):
+ """SQLSTATE 22007 - invalid datetime format."""
+
+ pass
+
+
+class InvalidEscapeCharacter(DataException):
+ """SQLSTATE 22019 - invalid escape character."""
+
+ pass
+
+
+class InvalidEscapeOctet(DataException):
+ """SQLSTATE 2200D - invalid escape octet."""
+
+ pass
+
+
+class InvalidEscapeSequence(DataException):
+ """SQLSTATE 22025 - invalid escape sequence."""
+
+ pass
+
+
+class NonstandardUseOfEscapeCharacter(DataException):
+ """SQLSTATE 22P06 - nonstandard use of escape character."""
+
+ pass
+
+
+class InvalidIndicatorParameterValue(DataException):
+ """SQLSTATE 22010 - invalid indicator parameter value."""
+
+ pass
+
+
+class InvalidParameterValue(DataException):
+ """SQLSTATE 22023 - invalid parameter value."""
+
+ pass
+
+
+class InvalidPrecedingOrFollowingSize(DataException):
+ """SQLSTATE 22013 - invalid preceding or following size."""
+
+ pass
+
+
+class InvalidRegularExpression(DataException):
+ """SQLSTATE 2201B - invalid regular expression."""
+
+ pass
+
+
+class InvalidRowCountInLimitClause(DataException):
+ """SQLSTATE 2201W - invalid row count in limit clause."""
+
+ pass
+
+
+class InvalidRowCountInResultOffsetClause(DataException):
+ """SQLSTATE 2201X - invalid row count in result offset clause."""
+
+ pass
+
+
+class InvalidTablesampleArgument(DataException):
+ """SQLSTATE 2202H - invalid tablesample argument."""
+
+ pass
+
+
+class InvalidTablesampleRepeat(DataException):
+ """SQLSTATE 2202G - invalid tablesample repeat."""
+
+ pass
+
+
+class InvalidTimeZoneDisplacementValue(DataException):
+ """SQLSTATE 22009 - invalid time zone displacement value."""
+
+ pass
+
+
+class InvalidUseOfEscapeCharacter(DataException):
+ """SQLSTATE 2200C - invalid use of escape character."""
+
+ pass
+
+
+class MostSpecificTypeMismatch(DataException):
+ """SQLSTATE 2200G - most specific type mismatch."""
+
+ pass
+
+
+class NullValueNotAllowed(DataException):
+ """SQLSTATE 22004 - null value not allowed."""
+
+ pass
+
+
+class NullValueNoIndicatorParameter(DataException):
+ """SQLSTATE 22002 - null value no indicator parameter."""
+
+ pass
+
+
+class NumericValueOutOfRange(DataException):
+ """SQLSTATE 22003 - numeric value out of range."""
+
+ pass
+
+
+class SequenceGeneratorLimitExceeded(DataException):
+ """SQLSTATE 2200H - sequence generator limit exceeded."""
+
+ pass
+
+
+class StringDataLengthMismatch(DataException):
+ """SQLSTATE 22026 - string data length mismatch."""
+
+ pass
+
+
+class StringDataRightTruncation(DataException):
+ """SQLSTATE 22001 - string data right truncation."""
+
+ pass
+
+
+class SubstringError(DataException):
+ """SQLSTATE 22011 - substring error."""
+
+ pass
+
+
+class TrimError(DataException):
+ """SQLSTATE 22027 - trim error."""
+
+ pass
+
+
+class UnterminatedCString(DataException):
+ """SQLSTATE 22024 - unterminated c string."""
+
+ pass
+
+
+class ZeroLengthCharacterString(DataException):
+ """SQLSTATE 2200F - zero length character string."""
+
+ pass
+
+
+class FloatingPointException(DataException):
+ """SQLSTATE 22P01 - floating point exception."""
+
+ pass
+
+
+class InvalidTextRepresentation(DataException):
+ """SQLSTATE 22P02 - invalid text representation."""
+
+ pass
+
+
+class InvalidBinaryRepresentation(DataException):
+ """SQLSTATE 22P03 - invalid binary representation."""
+
+ pass
+
+
+class BadCopyFileFormat(DataException):
+ """SQLSTATE 22P04 - bad copy file format."""
+
+ pass
+
+
+class UntranslatableCharacter(DataException):
+ """SQLSTATE 22P05 - untranslatable character."""
+
+ pass
+
+
+class NotAnXmlDocument(DataException):
+ """SQLSTATE 2200L - not an xml document."""
+
+ pass
+
+
+class InvalidXmlDocument(DataException):
+ """SQLSTATE 2200M - invalid xml document."""
+
+ pass
+
+
+class InvalidXmlContent(DataException):
+ """SQLSTATE 2200N - invalid xml content."""
+
+ pass
+
+
+class InvalidXmlComment(DataException):
+ """SQLSTATE 2200S - invalid xml comment."""
+
+ pass
+
+
+class InvalidXmlProcessingInstruction(DataException):
+ """SQLSTATE 2200T - invalid xml processing instruction."""
+
+ pass
+
+
+class DuplicateJsonObjectKeyValue(DataException):
+ """SQLSTATE 22030 - duplicate json object key value."""
+
+ pass
+
+
+class InvalidArgumentForSQLJsonDatetimeFunction(DataException):
+ """SQLSTATE 22031 - invalid argument for sql json datetime function."""
+
+ pass
+
+
+class InvalidJsonText(DataException):
+ """SQLSTATE 22032 - invalid json text."""
+
+ pass
+
+
+class InvalidSQLJsonSubscript(DataException):
+ """SQLSTATE 22033 - invalid sql json subscript."""
+
+ pass
+
+
+class MoreThanOneSQLJsonItem(DataException):
+ """SQLSTATE 22034 - more than one sql json item."""
+
+ pass
+
+
+class NoSQLJsonItem(DataException):
+ """SQLSTATE 22035 - no sql json item."""
+
+ pass
+
+
+class NonNumericSQLJsonItem(DataException):
+ """SQLSTATE 22036 - non numeric sql json item."""
+
+ pass
+
+
+class NonUniqueKeysInAJsonObject(DataException):
+ """SQLSTATE 22037 - non unique keys in a json object."""
+
+ pass
+
+
+class SingletonSQLJsonItemRequired(DataException):
+ """SQLSTATE 22038 - singleton sql json item required."""
+
+ pass
+
+
+class SQLJsonArrayNotFound(DataException):
+ """SQLSTATE 22039 - sql json array not found."""
+
+ pass
+
+
+class SQLJsonMemberNotFound(DataException):
+ """SQLSTATE 2203A - sql json member not found."""
+
+ pass
+
+
+class SQLJsonNumberNotFound(DataException):
+ """SQLSTATE 2203B - sql json number not found."""
+
+ pass
+
+
+class SQLJsonObjectNotFound(DataException):
+ """SQLSTATE 2203C - sql json object not found."""
+
+ pass
+
+
+class TooManyJsonArrayElements(DataException):
+ """SQLSTATE 2203D - too many json array elements."""
+
+ pass
+
+
+class TooManyJsonObjectMembers(DataException):
+ """SQLSTATE 2203E - too many json object members."""
+
+ pass
+
+
+class SQLJsonScalarRequired(DataException):
+ """SQLSTATE 2203F - sql json scalar required."""
+
+ pass
+
+
+class SQLJsonItemCannotBeCastToTargetType(DataException):
+ """SQLSTATE 2203G - sql json item cannot be cast to target type."""
+
+ pass
+
+
+class IntegrityConstraintViolation(LibpqError):
+ """SQLSTATE 23000 - integrity constraint violation."""
+
+ pass
+
+
+class RestrictViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23001 - restrict violation."""
+
+ pass
+
+
+class NotNullViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23502 - not null violation."""
+
+ pass
+
+
+class ForeignKeyViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23503 - foreign key violation."""
+
+ pass
+
+
+class UniqueViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23505 - unique violation."""
+
+ pass
+
+
+class CheckViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23514 - check violation."""
+
+ pass
+
+
+class ExclusionViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23P01 - exclusion violation."""
+
+ pass
+
+
+class InvalidCursorState(LibpqError):
+ """SQLSTATE 24000 - invalid cursor state."""
+
+ pass
+
+
+class InvalidTransactionState(LibpqError):
+ """SQLSTATE 25000 - invalid transaction state."""
+
+ pass
+
+
+class ActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25001 - active sql transaction."""
+
+ pass
+
+
+class BranchTransactionAlreadyActive(InvalidTransactionState):
+ """SQLSTATE 25002 - branch transaction already active."""
+
+ pass
+
+
+class HeldCursorRequiresSameIsolationLevel(InvalidTransactionState):
+ """SQLSTATE 25008 - held cursor requires same isolation level."""
+
+ pass
+
+
+class InappropriateAccessModeForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25003 - inappropriate access mode for branch transaction."""
+
+ pass
+
+
+class InappropriateIsolationLevelForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25004 - inappropriate isolation level for branch transaction."""
+
+ pass
+
+
+class NoActiveSQLTransactionForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25005 - no active sql transaction for branch transaction."""
+
+ pass
+
+
+class ReadOnlySQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25006 - read only sql transaction."""
+
+ pass
+
+
+class SchemaAndDataStatementMixingNotSupported(InvalidTransactionState):
+ """SQLSTATE 25007 - schema and data statement mixing not supported."""
+
+ pass
+
+
+class NoActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P01 - no active sql transaction."""
+
+ pass
+
+
+class InFailedSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P02 - in failed sql transaction."""
+
+ pass
+
+
+class IdleInTransactionSessionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P03 - idle in transaction session timeout."""
+
+ pass
+
+
+class TransactionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P04 - transaction timeout."""
+
+ pass
+
+
+class InvalidSQLStatementName(LibpqError):
+ """SQLSTATE 26000 - invalid sql statement name."""
+
+ pass
+
+
+class TriggeredDataChangeViolation(LibpqError):
+ """SQLSTATE 27000 - triggered data change violation."""
+
+ pass
+
+
+class InvalidAuthorizationSpecification(LibpqError):
+ """SQLSTATE 28000 - invalid authorization specification."""
+
+ pass
+
+
+class InvalidPassword(InvalidAuthorizationSpecification):
+ """SQLSTATE 28P01 - invalid password."""
+
+ pass
+
+
+class DependentPrivilegeDescriptorsStillExist(LibpqError):
+ """SQLSTATE 2B000 - dependent privilege descriptors still exist."""
+
+ pass
+
+
+class DependentObjectsStillExist(DependentPrivilegeDescriptorsStillExist):
+ """SQLSTATE 2BP01 - dependent objects still exist."""
+
+ pass
+
+
+class InvalidTransactionTermination(LibpqError):
+ """SQLSTATE 2D000 - invalid transaction termination."""
+
+ pass
+
+
+class SQLRoutineException(LibpqError):
+ """SQLSTATE 2F000 - sql routine exception."""
+
+ pass
+
+
+class FunctionExecutedNoReturnStatement(SQLRoutineException):
+ """SQLSTATE 2F005 - function executed no return statement."""
+
+ pass
+
+
+class SREModifyingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F002 - modifying sql data not permitted."""
+
+ pass
+
+
+class SREProhibitedSQLStatementAttempted(SQLRoutineException):
+ """SQLSTATE 2F003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class SREReadingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F004 - reading sql data not permitted."""
+
+ pass
+
+
+class InvalidCursorName(LibpqError):
+ """SQLSTATE 34000 - invalid cursor name."""
+
+ pass
+
+
+class ExternalRoutineException(LibpqError):
+ """SQLSTATE 38000 - external routine exception."""
+
+ pass
+
+
+class ContainingSQLNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38001 - containing sql not permitted."""
+
+ pass
+
+
+class EREModifyingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38002 - modifying sql data not permitted."""
+
+ pass
+
+
+class EREProhibitedSQLStatementAttempted(ExternalRoutineException):
+ """SQLSTATE 38003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class EREReadingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38004 - reading sql data not permitted."""
+
+ pass
+
+
+class ExternalRoutineInvocationException(LibpqError):
+ """SQLSTATE 39000 - external routine invocation exception."""
+
+ pass
+
+
+class InvalidSqlstateReturned(ExternalRoutineInvocationException):
+ """SQLSTATE 39001 - invalid sqlstate returned."""
+
+ pass
+
+
+class ERIENullValueNotAllowed(ExternalRoutineInvocationException):
+ """SQLSTATE 39004 - null value not allowed."""
+
+ pass
+
+
+class TriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P01 - trigger protocol violated."""
+
+ pass
+
+
+class SrfProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P02 - srf protocol violated."""
+
+ pass
+
+
+class EventTriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P03 - event trigger protocol violated."""
+
+ pass
+
+
+class SavepointException(LibpqError):
+ """SQLSTATE 3B000 - savepoint exception."""
+
+ pass
+
+
+class InvalidSavepointSpecification(SavepointException):
+ """SQLSTATE 3B001 - invalid savepoint specification."""
+
+ pass
+
+
+class InvalidCatalogName(LibpqError):
+ """SQLSTATE 3D000 - invalid catalog name."""
+
+ pass
+
+
+class InvalidSchemaName(LibpqError):
+ """SQLSTATE 3F000 - invalid schema name."""
+
+ pass
+
+
+class TransactionRollback(LibpqError):
+ """SQLSTATE 40000 - transaction rollback."""
+
+ pass
+
+
+class TransactionIntegrityConstraintViolation(TransactionRollback):
+ """SQLSTATE 40002 - transaction integrity constraint violation."""
+
+ pass
+
+
+class SerializationFailure(TransactionRollback):
+ """SQLSTATE 40001 - serialization failure."""
+
+ pass
+
+
+class StatementCompletionUnknown(TransactionRollback):
+ """SQLSTATE 40003 - statement completion unknown."""
+
+ pass
+
+
+class DeadlockDetected(TransactionRollback):
+ """SQLSTATE 40P01 - deadlock detected."""
+
+ pass
+
+
+class SyntaxErrorOrAccessRuleViolation(LibpqError):
+ """SQLSTATE 42000 - syntax error or access rule violation."""
+
+ pass
+
+
+class SyntaxError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42601 - syntax error."""
+
+ pass
+
+
+class InsufficientPrivilege(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42501 - insufficient privilege."""
+
+ pass
+
+
+class CannotCoerce(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42846 - cannot coerce."""
+
+ pass
+
+
+class GroupingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42803 - grouping error."""
+
+ pass
+
+
+class WindowingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P20 - windowing error."""
+
+ pass
+
+
+class InvalidRecursion(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P19 - invalid recursion."""
+
+ pass
+
+
+class InvalidForeignKey(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42830 - invalid foreign key."""
+
+ pass
+
+
+class InvalidName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42602 - invalid name."""
+
+ pass
+
+
+class NameTooLong(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42622 - name too long."""
+
+ pass
+
+
+class ReservedName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42939 - reserved name."""
+
+ pass
+
+
+class DatatypeMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42804 - datatype mismatch."""
+
+ pass
+
+
+class IndeterminateDatatype(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P18 - indeterminate datatype."""
+
+ pass
+
+
+class CollationMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P21 - collation mismatch."""
+
+ pass
+
+
+class IndeterminateCollation(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P22 - indeterminate collation."""
+
+ pass
+
+
+class WrongObjectType(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42809 - wrong object type."""
+
+ pass
+
+
+class GeneratedAlways(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 428C9 - generated always."""
+
+ pass
+
+
+class UndefinedColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42703 - undefined column."""
+
+ pass
+
+
+class UndefinedFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42883 - undefined function."""
+
+ pass
+
+
+class UndefinedTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P01 - undefined table."""
+
+ pass
+
+
+class UndefinedParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P02 - undefined parameter."""
+
+ pass
+
+
+class UndefinedObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42704 - undefined object."""
+
+ pass
+
+
+class DuplicateColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42701 - duplicate column."""
+
+ pass
+
+
+class DuplicateCursor(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P03 - duplicate cursor."""
+
+ pass
+
+
+class DuplicateDatabase(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P04 - duplicate database."""
+
+ pass
+
+
+class DuplicateFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42723 - duplicate function."""
+
+ pass
+
+
+class DuplicatePreparedStatement(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P05 - duplicate prepared statement."""
+
+ pass
+
+
+class DuplicateSchema(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P06 - duplicate schema."""
+
+ pass
+
+
+class DuplicateTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P07 - duplicate table."""
+
+ pass
+
+
+class DuplicateAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42712 - duplicate alias."""
+
+ pass
+
+
+class DuplicateObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42710 - duplicate object."""
+
+ pass
+
+
+class AmbiguousColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42702 - ambiguous column."""
+
+ pass
+
+
+class AmbiguousFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42725 - ambiguous function."""
+
+ pass
+
+
+class AmbiguousParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P08 - ambiguous parameter."""
+
+ pass
+
+
+class AmbiguousAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P09 - ambiguous alias."""
+
+ pass
+
+
+class InvalidColumnReference(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P10 - invalid column reference."""
+
+ pass
+
+
+class InvalidColumnDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42611 - invalid column definition."""
+
+ pass
+
+
+class InvalidCursorDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P11 - invalid cursor definition."""
+
+ pass
+
+
+class InvalidDatabaseDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P12 - invalid database definition."""
+
+ pass
+
+
+class InvalidFunctionDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P13 - invalid function definition."""
+
+ pass
+
+
+class InvalidPreparedStatementDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P14 - invalid prepared statement definition."""
+
+ pass
+
+
+class InvalidSchemaDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P15 - invalid schema definition."""
+
+ pass
+
+
+class InvalidTableDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P16 - invalid table definition."""
+
+ pass
+
+
+class InvalidObjectDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P17 - invalid object definition."""
+
+ pass
+
+
+class WithCheckOptionViolation(LibpqError):
+ """SQLSTATE 44000 - with check option violation."""
+
+ pass
+
+
+class InsufficientResources(LibpqError):
+ """SQLSTATE 53000 - insufficient resources."""
+
+ pass
+
+
+class DiskFull(InsufficientResources):
+ """SQLSTATE 53100 - disk full."""
+
+ pass
+
+
+class OutOfMemory(InsufficientResources):
+ """SQLSTATE 53200 - out of memory."""
+
+ pass
+
+
+class TooManyConnections(InsufficientResources):
+ """SQLSTATE 53300 - too many connections."""
+
+ pass
+
+
+class ConfigurationLimitExceeded(InsufficientResources):
+ """SQLSTATE 53400 - configuration limit exceeded."""
+
+ pass
+
+
+class ProgramLimitExceeded(LibpqError):
+ """SQLSTATE 54000 - program limit exceeded."""
+
+ pass
+
+
+class StatementTooComplex(ProgramLimitExceeded):
+ """SQLSTATE 54001 - statement too complex."""
+
+ pass
+
+
+class TooManyColumns(ProgramLimitExceeded):
+ """SQLSTATE 54011 - too many columns."""
+
+ pass
+
+
+class TooManyArguments(ProgramLimitExceeded):
+ """SQLSTATE 54023 - too many arguments."""
+
+ pass
+
+
+class ObjectNotInPrerequisiteState(LibpqError):
+ """SQLSTATE 55000 - object not in prerequisite state."""
+
+ pass
+
+
+class ObjectInUse(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55006 - object in use."""
+
+ pass
+
+
+class CantChangeRuntimeParam(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P02 - cant change runtime param."""
+
+ pass
+
+
+class LockNotAvailable(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P03 - lock not available."""
+
+ pass
+
+
+class UnsafeNewEnumValueUsage(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P04 - unsafe new enum value usage."""
+
+ pass
+
+
+class OperatorIntervention(LibpqError):
+ """SQLSTATE 57000 - operator intervention."""
+
+ pass
+
+
+class QueryCanceled(OperatorIntervention):
+ """SQLSTATE 57014 - query canceled."""
+
+ pass
+
+
+class AdminShutdown(OperatorIntervention):
+ """SQLSTATE 57P01 - admin shutdown."""
+
+ pass
+
+
+class CrashShutdown(OperatorIntervention):
+ """SQLSTATE 57P02 - crash shutdown."""
+
+ pass
+
+
+class CannotConnectNow(OperatorIntervention):
+ """SQLSTATE 57P03 - cannot connect now."""
+
+ pass
+
+
+class DatabaseDropped(OperatorIntervention):
+ """SQLSTATE 57P04 - database dropped."""
+
+ pass
+
+
+class IdleSessionTimeout(OperatorIntervention):
+ """SQLSTATE 57P05 - idle session timeout."""
+
+ pass
+
+
+class SystemError(LibpqError):
+ """SQLSTATE 58000 - system error."""
+
+ pass
+
+
+class IoError(SystemError):
+ """SQLSTATE 58030 - io error."""
+
+ pass
+
+
+class UndefinedFile(SystemError):
+ """SQLSTATE 58P01 - undefined file."""
+
+ pass
+
+
+class DuplicateFile(SystemError):
+ """SQLSTATE 58P02 - duplicate file."""
+
+ pass
+
+
+class FileNameTooLong(SystemError):
+ """SQLSTATE 58P03 - file name too long."""
+
+ pass
+
+
+class ConfigFileError(LibpqError):
+ """SQLSTATE F0000 - config file error."""
+
+ pass
+
+
+class LockFileExists(ConfigFileError):
+ """SQLSTATE F0001 - lock file exists."""
+
+ pass
+
+
+class FDWError(LibpqError):
+ """SQLSTATE HV000 - fdw error."""
+
+ pass
+
+
+class FDWColumnNameNotFound(FDWError):
+ """SQLSTATE HV005 - fdw column name not found."""
+
+ pass
+
+
+class FDWDynamicParameterValueNeeded(FDWError):
+ """SQLSTATE HV002 - fdw dynamic parameter value needed."""
+
+ pass
+
+
+class FDWFunctionSequenceError(FDWError):
+ """SQLSTATE HV010 - fdw function sequence error."""
+
+ pass
+
+
+class FDWInconsistentDescriptorInformation(FDWError):
+ """SQLSTATE HV021 - fdw inconsistent descriptor information."""
+
+ pass
+
+
+class FDWInvalidAttributeValue(FDWError):
+ """SQLSTATE HV024 - fdw invalid attribute value."""
+
+ pass
+
+
+class FDWInvalidColumnName(FDWError):
+ """SQLSTATE HV007 - fdw invalid column name."""
+
+ pass
+
+
+class FDWInvalidColumnNumber(FDWError):
+ """SQLSTATE HV008 - fdw invalid column number."""
+
+ pass
+
+
+class FDWInvalidDataType(FDWError):
+ """SQLSTATE HV004 - fdw invalid data type."""
+
+ pass
+
+
+class FDWInvalidDataTypeDescriptors(FDWError):
+ """SQLSTATE HV006 - fdw invalid data type descriptors."""
+
+ pass
+
+
+class FDWInvalidDescriptorFieldIdentifier(FDWError):
+ """SQLSTATE HV091 - fdw invalid descriptor field identifier."""
+
+ pass
+
+
+class FDWInvalidHandle(FDWError):
+ """SQLSTATE HV00B - fdw invalid handle."""
+
+ pass
+
+
+class FDWInvalidOptionIndex(FDWError):
+ """SQLSTATE HV00C - fdw invalid option index."""
+
+ pass
+
+
+class FDWInvalidOptionName(FDWError):
+ """SQLSTATE HV00D - fdw invalid option name."""
+
+ pass
+
+
+class FDWInvalidStringLengthOrBufferLength(FDWError):
+ """SQLSTATE HV090 - fdw invalid string length or buffer length."""
+
+ pass
+
+
+class FDWInvalidStringFormat(FDWError):
+ """SQLSTATE HV00A - fdw invalid string format."""
+
+ pass
+
+
+class FDWInvalidUseOfNullPointer(FDWError):
+ """SQLSTATE HV009 - fdw invalid use of null pointer."""
+
+ pass
+
+
+class FDWTooManyHandles(FDWError):
+ """SQLSTATE HV014 - fdw too many handles."""
+
+ pass
+
+
+class FDWOutOfMemory(FDWError):
+ """SQLSTATE HV001 - fdw out of memory."""
+
+ pass
+
+
+class FDWNoSchemas(FDWError):
+ """SQLSTATE HV00P - fdw no schemas."""
+
+ pass
+
+
+class FDWOptionNameNotFound(FDWError):
+ """SQLSTATE HV00J - fdw option name not found."""
+
+ pass
+
+
+class FDWReplyHandle(FDWError):
+ """SQLSTATE HV00K - fdw reply handle."""
+
+ pass
+
+
+class FDWSchemaNotFound(FDWError):
+ """SQLSTATE HV00Q - fdw schema not found."""
+
+ pass
+
+
+class FDWTableNotFound(FDWError):
+ """SQLSTATE HV00R - fdw table not found."""
+
+ pass
+
+
+class FDWUnableToCreateExecution(FDWError):
+ """SQLSTATE HV00L - fdw unable to create execution."""
+
+ pass
+
+
+class FDWUnableToCreateReply(FDWError):
+ """SQLSTATE HV00M - fdw unable to create reply."""
+
+ pass
+
+
+class FDWUnableToEstablishConnection(FDWError):
+ """SQLSTATE HV00N - fdw unable to establish connection."""
+
+ pass
+
+
+class PlpgsqlError(LibpqError):
+ """SQLSTATE P0000 - plpgsql error."""
+
+ pass
+
+
+class RaiseException(PlpgsqlError):
+ """SQLSTATE P0001 - raise exception."""
+
+ pass
+
+
+class NoDataFound(PlpgsqlError):
+ """SQLSTATE P0002 - no data found."""
+
+ pass
+
+
+class TooManyRows(PlpgsqlError):
+ """SQLSTATE P0003 - too many rows."""
+
+ pass
+
+
+class AssertFailure(PlpgsqlError):
+ """SQLSTATE P0004 - assert failure."""
+
+ pass
+
+
+class InternalError(LibpqError):
+ """SQLSTATE XX000 - internal error."""
+
+ pass
+
+
+class DataCorrupted(InternalError):
+ """SQLSTATE XX001 - data corrupted."""
+
+ pass
+
+
+class IndexCorrupted(InternalError):
+ """SQLSTATE XX002 - index corrupted."""
+
+ pass
+
+
+SQLSTATE_TO_EXCEPTION: Dict[str, type] = {
+ "00000": SuccessfulCompletion,
+ "01000": Warning,
+ "0100C": DynamicResultSetsReturnedWarning,
+ "01008": ImplicitZeroBitPaddingWarning,
+ "01003": NullValueEliminatedInSetFunctionWarning,
+ "01007": PrivilegeNotGrantedWarning,
+ "01006": PrivilegeNotRevokedWarning,
+ "01004": StringDataRightTruncationWarning,
+ "01P01": DeprecatedFeatureWarning,
+ "02000": NoData,
+ "02001": NoAdditionalDynamicResultSetsReturned,
+ "03000": SQLStatementNotYetComplete,
+ "08000": ConnectionException,
+ "08003": ConnectionDoesNotExist,
+ "08006": ConnectionFailure,
+ "08001": SQLClientUnableToEstablishSQLConnection,
+ "08004": SQLServerRejectedEstablishmentOfSQLConnection,
+ "08007": TransactionResolutionUnknown,
+ "08P01": ProtocolViolation,
+ "09000": TriggeredActionException,
+ "0A000": FeatureNotSupported,
+ "0B000": InvalidTransactionInitiation,
+ "0F000": LocatorException,
+ "0F001": InvalidLocatorSpecification,
+ "0L000": InvalidGrantor,
+ "0LP01": InvalidGrantOperation,
+ "0P000": InvalidRoleSpecification,
+ "0Z000": DiagnosticsException,
+ "0Z002": StackedDiagnosticsAccessedWithoutActiveHandler,
+ "10608": InvalidArgumentForXquery,
+ "20000": CaseNotFound,
+ "21000": CardinalityViolation,
+ "22000": DataException,
+ "2202E": ArraySubscriptError,
+ "22021": CharacterNotInRepertoire,
+ "22008": DatetimeFieldOverflow,
+ "22012": DivisionByZero,
+ "22005": ErrorInAssignment,
+ "2200B": EscapeCharacterConflict,
+ "22022": IndicatorOverflow,
+ "22015": IntervalFieldOverflow,
+ "2201E": InvalidArgumentForLogarithm,
+ "22014": InvalidArgumentForNtileFunction,
+ "22016": InvalidArgumentForNthValueFunction,
+ "2201F": InvalidArgumentForPowerFunction,
+ "2201G": InvalidArgumentForWidthBucketFunction,
+ "22018": InvalidCharacterValueForCast,
+ "22007": InvalidDatetimeFormat,
+ "22019": InvalidEscapeCharacter,
+ "2200D": InvalidEscapeOctet,
+ "22025": InvalidEscapeSequence,
+ "22P06": NonstandardUseOfEscapeCharacter,
+ "22010": InvalidIndicatorParameterValue,
+ "22023": InvalidParameterValue,
+ "22013": InvalidPrecedingOrFollowingSize,
+ "2201B": InvalidRegularExpression,
+ "2201W": InvalidRowCountInLimitClause,
+ "2201X": InvalidRowCountInResultOffsetClause,
+ "2202H": InvalidTablesampleArgument,
+ "2202G": InvalidTablesampleRepeat,
+ "22009": InvalidTimeZoneDisplacementValue,
+ "2200C": InvalidUseOfEscapeCharacter,
+ "2200G": MostSpecificTypeMismatch,
+ "22004": NullValueNotAllowed,
+ "22002": NullValueNoIndicatorParameter,
+ "22003": NumericValueOutOfRange,
+ "2200H": SequenceGeneratorLimitExceeded,
+ "22026": StringDataLengthMismatch,
+ "22001": StringDataRightTruncation,
+ "22011": SubstringError,
+ "22027": TrimError,
+ "22024": UnterminatedCString,
+ "2200F": ZeroLengthCharacterString,
+ "22P01": FloatingPointException,
+ "22P02": InvalidTextRepresentation,
+ "22P03": InvalidBinaryRepresentation,
+ "22P04": BadCopyFileFormat,
+ "22P05": UntranslatableCharacter,
+ "2200L": NotAnXmlDocument,
+ "2200M": InvalidXmlDocument,
+ "2200N": InvalidXmlContent,
+ "2200S": InvalidXmlComment,
+ "2200T": InvalidXmlProcessingInstruction,
+ "22030": DuplicateJsonObjectKeyValue,
+ "22031": InvalidArgumentForSQLJsonDatetimeFunction,
+ "22032": InvalidJsonText,
+ "22033": InvalidSQLJsonSubscript,
+ "22034": MoreThanOneSQLJsonItem,
+ "22035": NoSQLJsonItem,
+ "22036": NonNumericSQLJsonItem,
+ "22037": NonUniqueKeysInAJsonObject,
+ "22038": SingletonSQLJsonItemRequired,
+ "22039": SQLJsonArrayNotFound,
+ "2203A": SQLJsonMemberNotFound,
+ "2203B": SQLJsonNumberNotFound,
+ "2203C": SQLJsonObjectNotFound,
+ "2203D": TooManyJsonArrayElements,
+ "2203E": TooManyJsonObjectMembers,
+ "2203F": SQLJsonScalarRequired,
+ "2203G": SQLJsonItemCannotBeCastToTargetType,
+ "23000": IntegrityConstraintViolation,
+ "23001": RestrictViolation,
+ "23502": NotNullViolation,
+ "23503": ForeignKeyViolation,
+ "23505": UniqueViolation,
+ "23514": CheckViolation,
+ "23P01": ExclusionViolation,
+ "24000": InvalidCursorState,
+ "25000": InvalidTransactionState,
+ "25001": ActiveSQLTransaction,
+ "25002": BranchTransactionAlreadyActive,
+ "25008": HeldCursorRequiresSameIsolationLevel,
+ "25003": InappropriateAccessModeForBranchTransaction,
+ "25004": InappropriateIsolationLevelForBranchTransaction,
+ "25005": NoActiveSQLTransactionForBranchTransaction,
+ "25006": ReadOnlySQLTransaction,
+ "25007": SchemaAndDataStatementMixingNotSupported,
+ "25P01": NoActiveSQLTransaction,
+ "25P02": InFailedSQLTransaction,
+ "25P03": IdleInTransactionSessionTimeout,
+ "25P04": TransactionTimeout,
+ "26000": InvalidSQLStatementName,
+ "27000": TriggeredDataChangeViolation,
+ "28000": InvalidAuthorizationSpecification,
+ "28P01": InvalidPassword,
+ "2B000": DependentPrivilegeDescriptorsStillExist,
+ "2BP01": DependentObjectsStillExist,
+ "2D000": InvalidTransactionTermination,
+ "2F000": SQLRoutineException,
+ "2F005": FunctionExecutedNoReturnStatement,
+ "2F002": SREModifyingSQLDataNotPermitted,
+ "2F003": SREProhibitedSQLStatementAttempted,
+ "2F004": SREReadingSQLDataNotPermitted,
+ "34000": InvalidCursorName,
+ "38000": ExternalRoutineException,
+ "38001": ContainingSQLNotPermitted,
+ "38002": EREModifyingSQLDataNotPermitted,
+ "38003": EREProhibitedSQLStatementAttempted,
+ "38004": EREReadingSQLDataNotPermitted,
+ "39000": ExternalRoutineInvocationException,
+ "39001": InvalidSqlstateReturned,
+ "39004": ERIENullValueNotAllowed,
+ "39P01": TriggerProtocolViolated,
+ "39P02": SrfProtocolViolated,
+ "39P03": EventTriggerProtocolViolated,
+ "3B000": SavepointException,
+ "3B001": InvalidSavepointSpecification,
+ "3D000": InvalidCatalogName,
+ "3F000": InvalidSchemaName,
+ "40000": TransactionRollback,
+ "40002": TransactionIntegrityConstraintViolation,
+ "40001": SerializationFailure,
+ "40003": StatementCompletionUnknown,
+ "40P01": DeadlockDetected,
+ "42000": SyntaxErrorOrAccessRuleViolation,
+ "42601": SyntaxError,
+ "42501": InsufficientPrivilege,
+ "42846": CannotCoerce,
+ "42803": GroupingError,
+ "42P20": WindowingError,
+ "42P19": InvalidRecursion,
+ "42830": InvalidForeignKey,
+ "42602": InvalidName,
+ "42622": NameTooLong,
+ "42939": ReservedName,
+ "42804": DatatypeMismatch,
+ "42P18": IndeterminateDatatype,
+ "42P21": CollationMismatch,
+ "42P22": IndeterminateCollation,
+ "42809": WrongObjectType,
+ "428C9": GeneratedAlways,
+ "42703": UndefinedColumn,
+ "42883": UndefinedFunction,
+ "42P01": UndefinedTable,
+ "42P02": UndefinedParameter,
+ "42704": UndefinedObject,
+ "42701": DuplicateColumn,
+ "42P03": DuplicateCursor,
+ "42P04": DuplicateDatabase,
+ "42723": DuplicateFunction,
+ "42P05": DuplicatePreparedStatement,
+ "42P06": DuplicateSchema,
+ "42P07": DuplicateTable,
+ "42712": DuplicateAlias,
+ "42710": DuplicateObject,
+ "42702": AmbiguousColumn,
+ "42725": AmbiguousFunction,
+ "42P08": AmbiguousParameter,
+ "42P09": AmbiguousAlias,
+ "42P10": InvalidColumnReference,
+ "42611": InvalidColumnDefinition,
+ "42P11": InvalidCursorDefinition,
+ "42P12": InvalidDatabaseDefinition,
+ "42P13": InvalidFunctionDefinition,
+ "42P14": InvalidPreparedStatementDefinition,
+ "42P15": InvalidSchemaDefinition,
+ "42P16": InvalidTableDefinition,
+ "42P17": InvalidObjectDefinition,
+ "44000": WithCheckOptionViolation,
+ "53000": InsufficientResources,
+ "53100": DiskFull,
+ "53200": OutOfMemory,
+ "53300": TooManyConnections,
+ "53400": ConfigurationLimitExceeded,
+ "54000": ProgramLimitExceeded,
+ "54001": StatementTooComplex,
+ "54011": TooManyColumns,
+ "54023": TooManyArguments,
+ "55000": ObjectNotInPrerequisiteState,
+ "55006": ObjectInUse,
+ "55P02": CantChangeRuntimeParam,
+ "55P03": LockNotAvailable,
+ "55P04": UnsafeNewEnumValueUsage,
+ "57000": OperatorIntervention,
+ "57014": QueryCanceled,
+ "57P01": AdminShutdown,
+ "57P02": CrashShutdown,
+ "57P03": CannotConnectNow,
+ "57P04": DatabaseDropped,
+ "57P05": IdleSessionTimeout,
+ "58000": SystemError,
+ "58030": IoError,
+ "58P01": UndefinedFile,
+ "58P02": DuplicateFile,
+ "58P03": FileNameTooLong,
+ "F0000": ConfigFileError,
+ "F0001": LockFileExists,
+ "HV000": FDWError,
+ "HV005": FDWColumnNameNotFound,
+ "HV002": FDWDynamicParameterValueNeeded,
+ "HV010": FDWFunctionSequenceError,
+ "HV021": FDWInconsistentDescriptorInformation,
+ "HV024": FDWInvalidAttributeValue,
+ "HV007": FDWInvalidColumnName,
+ "HV008": FDWInvalidColumnNumber,
+ "HV004": FDWInvalidDataType,
+ "HV006": FDWInvalidDataTypeDescriptors,
+ "HV091": FDWInvalidDescriptorFieldIdentifier,
+ "HV00B": FDWInvalidHandle,
+ "HV00C": FDWInvalidOptionIndex,
+ "HV00D": FDWInvalidOptionName,
+ "HV090": FDWInvalidStringLengthOrBufferLength,
+ "HV00A": FDWInvalidStringFormat,
+ "HV009": FDWInvalidUseOfNullPointer,
+ "HV014": FDWTooManyHandles,
+ "HV001": FDWOutOfMemory,
+ "HV00P": FDWNoSchemas,
+ "HV00J": FDWOptionNameNotFound,
+ "HV00K": FDWReplyHandle,
+ "HV00Q": FDWSchemaNotFound,
+ "HV00R": FDWTableNotFound,
+ "HV00L": FDWUnableToCreateExecution,
+ "HV00M": FDWUnableToCreateReply,
+ "HV00N": FDWUnableToEstablishConnection,
+ "P0000": PlpgsqlError,
+ "P0001": RaiseException,
+ "P0002": NoDataFound,
+ "P0003": TooManyRows,
+ "P0004": AssertFailure,
+ "XX000": InternalError,
+ "XX001": DataCorrupted,
+ "XX002": IndexCorrupted,
+}
+
+
+__all__ = [
+ "InvalidCursorName",
+ "UndefinedParameter",
+ "UndefinedColumn",
+ "NotAnXmlDocument",
+ "FDWOutOfMemory",
+ "InvalidRoleSpecification",
+ "InvalidArgumentForNthValueFunction",
+ "SQLJsonObjectNotFound",
+ "FDWSchemaNotFound",
+ "InvalidParameterValue",
+ "InvalidTableDefinition",
+ "AssertFailure",
+ "FDWInvalidOptionName",
+ "InvalidEscapeOctet",
+ "ReadOnlySQLTransaction",
+ "ExternalRoutineInvocationException",
+ "CrashShutdown",
+ "FDWInvalidOptionIndex",
+ "NotNullViolation",
+ "ConfigFileError",
+ "InvalidSQLJsonSubscript",
+ "InvalidForeignKey",
+ "InsufficientResources",
+ "ObjectNotInPrerequisiteState",
+ "InvalidRowCountInLimitClause",
+ "IntervalFieldOverflow",
+ "CollationMismatch",
+ "InvalidArgumentForNtileFunction",
+ "InvalidCharacterValueForCast",
+ "NonUniqueKeysInAJsonObject",
+ "DependentPrivilegeDescriptorsStillExist",
+ "InFailedSQLTransaction",
+ "GroupingError",
+ "TransactionTimeout",
+ "CaseNotFound",
+ "ConnectionException",
+ "DuplicateJsonObjectKeyValue",
+ "InvalidSchemaDefinition",
+ "FDWUnableToCreateReply",
+ "UndefinedTable",
+ "SequenceGeneratorLimitExceeded",
+ "InvalidJsonText",
+ "IdleSessionTimeout",
+ "NullValueNotAllowed",
+ "BranchTransactionAlreadyActive",
+ "InvalidGrantOperation",
+ "NullValueNoIndicatorParameter",
+ "ProtocolViolation",
+ "FDWInvalidDataTypeDescriptors",
+ "TriggeredDataChangeViolation",
+ "ExternalRoutineException",
+ "InvalidSqlstateReturned",
+ "PlpgsqlError",
+ "InvalidXmlContent",
+ "TriggeredActionException",
+ "SQLClientUnableToEstablishSQLConnection",
+ "FDWTableNotFound",
+ "NumericValueOutOfRange",
+ "RestrictViolation",
+ "AmbiguousParameter",
+ "StatementTooComplex",
+ "UnsafeNewEnumValueUsage",
+ "NonNumericSQLJsonItem",
+ "InvalidIndicatorParameterValue",
+ "ExclusionViolation",
+ "OperatorIntervention",
+ "QueryCanceled",
+ "Warning",
+ "InvalidArgumentForSQLJsonDatetimeFunction",
+ "ForeignKeyViolation",
+ "StringDataLengthMismatch",
+ "SQLRoutineException",
+ "TooManyConnections",
+ "TooManyJsonObjectMembers",
+ "NoData",
+ "UntranslatableCharacter",
+ "FDWUnableToEstablishConnection",
+ "LockFileExists",
+ "SREReadingSQLDataNotPermitted",
+ "IndeterminateDatatype",
+ "CheckViolation",
+ "InvalidDatabaseDefinition",
+ "NoActiveSQLTransactionForBranchTransaction",
+ "SQLServerRejectedEstablishmentOfSQLConnection",
+ "DuplicateFile",
+ "FDWInvalidColumnNumber",
+ "TransactionRollback",
+ "MoreThanOneSQLJsonItem",
+ "WithCheckOptionViolation",
+ "FDWNoSchemas",
+ "GeneratedAlways",
+ "CannotConnectNow",
+ "CardinalityViolation",
+ "InvalidAuthorizationSpecification",
+ "SQLJsonNumberNotFound",
+ "SQLJsonMemberNotFound",
+ "InvalidUseOfEscapeCharacter",
+ "UnterminatedCString",
+ "TrimError",
+ "SrfProtocolViolated",
+ "DiskFull",
+ "TooManyColumns",
+ "InvalidObjectDefinition",
+ "InvalidArgumentForLogarithm",
+ "TooManyJsonArrayElements",
+ "OutOfMemory",
+ "EREProhibitedSQLStatementAttempted",
+ "FDWInvalidStringFormat",
+ "StackedDiagnosticsAccessedWithoutActiveHandler",
+ "SchemaAndDataStatementMixingNotSupported",
+ "InternalError",
+ "InvalidEscapeCharacter",
+ "FDWError",
+ "ImplicitZeroBitPaddingWarning",
+ "DivisionByZero",
+ "InvalidTablesampleArgument",
+ "DeadlockDetected",
+ "CantChangeRuntimeParam",
+ "UndefinedObject",
+ "UniqueViolation",
+ "InvalidCursorDefinition",
+ "ConnectionFailure",
+ "UndefinedFunction",
+ "FDWFunctionSequenceError",
+ "ErrorInAssignment",
+ "SuccessfulCompletion",
+ "StringDataRightTruncation",
+ "FDWTooManyHandles",
+ "FDWInvalidDataType",
+ "ActiveSQLTransaction",
+ "InvalidTextRepresentation",
+ "InvalidSQLStatementName",
+ "PrivilegeNotGrantedWarning",
+ "SREModifyingSQLDataNotPermitted",
+ "IndeterminateCollation",
+ "SystemError",
+ "NullValueEliminatedInSetFunctionWarning",
+ "DependentObjectsStillExist",
+ "InvalidSchemaName",
+ "DuplicateColumn",
+ "FunctionExecutedNoReturnStatement",
+ "InvalidColumnDefinition",
+ "DynamicResultSetsReturnedWarning",
+ "IdleInTransactionSessionTimeout",
+ "StatementCompletionUnknown",
+ "CannotCoerce",
+ "InvalidTransactionState",
+ "DuplicateTable",
+ "BadCopyFileFormat",
+ "ZeroLengthCharacterString",
+ "SyntaxErrorOrAccessRuleViolation",
+ "SingletonSQLJsonItemRequired",
+ "IndexCorrupted",
+ "FDWInvalidColumnName",
+ "DataCorrupted",
+ "ERIENullValueNotAllowed",
+ "ArraySubscriptError",
+ "FDWReplyHandle",
+ "DiagnosticsException",
+ "InvalidTablesampleRepeat",
+ "SQLJsonItemCannotBeCastToTargetType",
+ "FDWInvalidHandle",
+ "InvalidPassword",
+ "InvalidEscapeSequence",
+ "EscapeCharacterConflict",
+ "InvalidSavepointSpecification",
+ "FDWInvalidAttributeValue",
+ "ContainingSQLNotPermitted",
+ "LocatorException",
+ "DatatypeMismatch",
+ "InvalidCursorState",
+ "InvalidName",
+ "IndicatorOverflow",
+ "ReservedName",
+ "DatetimeFieldOverflow",
+ "FDWInconsistentDescriptorInformation",
+ "FloatingPointException",
+ "AmbiguousAlias",
+ "InvalidRecursion",
+ "WrongObjectType",
+ "UndefinedFile",
+ "LockNotAvailable",
+ "InvalidRowCountInResultOffsetClause",
+ "ObjectInUse",
+ "DeprecatedFeatureWarning",
+ "FDWDynamicParameterValueNeeded",
+ "DuplicateFunction",
+ "InvalidXmlDocument",
+ "StringDataRightTruncationWarning",
+ "DuplicatePreparedStatement",
+ "InvalidGrantor",
+ "EventTriggerProtocolViolated",
+ "FDWInvalidUseOfNullPointer",
+ "FDWUnableToCreateExecution",
+ "ConnectionDoesNotExist",
+ "InvalidCatalogName",
+ "InvalidArgumentForXquery",
+ "FDWColumnNameNotFound",
+ "TransactionIntegrityConstraintViolation",
+ "InvalidPreparedStatementDefinition",
+ "FDWInvalidDescriptorFieldIdentifier",
+ "FDWOptionNameNotFound",
+ "InvalidArgumentForPowerFunction",
+ "FDWInvalidStringLengthOrBufferLength",
+ "SREProhibitedSQLStatementAttempted",
+ "NoDataFound",
+ "DuplicateDatabase",
+ "FeatureNotSupported",
+ "IntegrityConstraintViolation",
+ "AmbiguousColumn",
+ "PrivilegeNotRevokedWarning",
+ "FileNameTooLong",
+ "InvalidArgumentForWidthBucketFunction",
+ "HeldCursorRequiresSameIsolationLevel",
+ "NoSQLJsonItem",
+ "IoError",
+ "SavepointException",
+ "NoActiveSQLTransaction",
+ "InvalidFunctionDefinition",
+ "AdminShutdown",
+ "DatabaseDropped",
+ "InvalidRegularExpression",
+ "WindowingError",
+ "InvalidColumnReference",
+ "InvalidBinaryRepresentation",
+ "SQLJsonScalarRequired",
+ "ConfigurationLimitExceeded",
+ "SyntaxError",
+ "SerializationFailure",
+ "ProgramLimitExceeded",
+ "DuplicateSchema",
+ "SQLStatementNotYetComplete",
+ "LibpqError",
+ "DataException",
+ "SubstringError",
+ "InvalidLocatorSpecification",
+ "InappropriateAccessModeForBranchTransaction",
+ "EREModifyingSQLDataNotPermitted",
+ "InsufficientPrivilege",
+ "NoAdditionalDynamicResultSetsReturned",
+ "SQLJsonArrayNotFound",
+ "NameTooLong",
+ "InvalidTimeZoneDisplacementValue",
+ "InappropriateIsolationLevelForBranchTransaction",
+ "RaiseException",
+ "EREReadingSQLDataNotPermitted",
+ "TriggerProtocolViolated",
+ "NonstandardUseOfEscapeCharacter",
+ "InvalidTransactionInitiation",
+ "DuplicateAlias",
+ "TransactionResolutionUnknown",
+ "TooManyRows",
+ "InvalidXmlComment",
+ "MostSpecificTypeMismatch",
+ "DuplicateObject",
+ "DuplicateCursor",
+ "AmbiguousFunction",
+ "TooManyArguments",
+ "InvalidXmlProcessingInstruction",
+ "InvalidTransactionTermination",
+ "InvalidDatetimeFormat",
+ "InvalidPrecedingOrFollowingSize",
+ "CharacterNotInRepertoire",
+ "SQLSTATE_TO_EXCEPTION",
+]
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..764a96c2478
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+PostgreSQL error types mapped from SQLSTATE codes.
+
+This module provides LibpqError and its subclasses for handling PostgreSQL
+errors based on SQLSTATE codes. The exception classes in _generated_errors.py
+are auto-generated from src/backend/utils/errcodes.txt.
+
+To regenerate: src/tools/generate_pytest_libpq_errors.py
+"""
+
+from typing import Optional
+
+from ._error_base import LibpqError, LibpqWarning
+from ._generated_errors import (
+ SQLSTATE_TO_EXCEPTION,
+)
+from ._generated_errors import * # noqa: F403
+
+
+def get_exception_class(sqlstate: Optional[str]) -> type:
+ """Get the appropriate exception class for a SQLSTATE code."""
+ if sqlstate in SQLSTATE_TO_EXCEPTION:
+ return SQLSTATE_TO_EXCEPTION[sqlstate]
+ return LibpqError
+
+
+def make_error(message: str, *, sqlstate: Optional[str] = None, **kwargs) -> LibpqError:
+ """Create an appropriate LibpqError subclass based on the SQLSTATE code."""
+ exc_class = get_exception_class(sqlstate)
+ return exc_class(message, sqlstate=sqlstate, **kwargs)
+
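+# A quick illustration of the mapping (using SQLSTATEs from the table above):
+#
+#     err = make_error("duplicate key value", sqlstate="23505")
+#     assert isinstance(err, UniqueViolation)
+#
+#     err = make_error("mystery failure")     # unknown or missing SQLSTATE
+#     assert type(err) is LibpqError          # falls back to the base class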
+
+__all__ = [
+ "LibpqError",
+ "LibpqWarning",
+ "make_error",
+]
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,7 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
- 'pyt/test_something.py',
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
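+# For reference: PG_TEST_EXTRA is a space-separated list, so e.g.
+# PG_TEST_EXTRA="ssl ldap" enables both the "ssl" and "ldap" keys.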
+
+def require_test_extras(*keys: str):
+ """
+ A convenience decorator which skips tests unless all of the required keys
+ are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys is missing from
+ PG_TEST_EXTRA. Use this inside fixtures, where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
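+# Typical use in a test body (illustrative): pass the returned callable into
+# blocking operations, e.g. sock.settimeout(remaining_timeout()), so that a
+# stuck test fails with a useful stack trace instead of relying on any
+# harness-level timeout to kill the whole run.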
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+ This fixture is per-module, so it's generally only useful for configuring
+ timeouts of operations that happen in the setup phase of other
+ module-scoped fixtures. If you used it in a test, each subsequent test in
+ the module would get a smaller and smaller timeout.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python interpreter.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context. This is useful when certain settings need to
+ be overridden at the module level through autouse fixtures; the SSL tests
+ contain an example.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
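+# A minimal sketch of per-test server use (names illustrative):
+#
+#     def test_temp_user(pg):
+#         pg.create_users("app")  # creates "appuser"; dropped again at test end
+#         assert pg.sql(
+#             "SELECT count(*) FROM pg_roles WHERE rolname = 'appuser'"
+#         ) == 1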
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
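+# FileBackup is normally used through the HBA/Config wrappers below, e.g.:
+#
+#     with HBA(datadir) as hba:
+#         hba.prepend("local all all trust")
+#     # on exit, pg_hba.conf is restored; the edited copy is kept for debugging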
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
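+# The list form is column-alignment sugar; both of these calls write an
+# equivalent HBA line:
+#
+#     hba.prepend("local all all trust")
+#     hba.prepend(["local", "all", "all", "trust"])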
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
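+# For example, conf.set(work_mem="64MB", log_connections="on") appends
+#
+#     work_mem = '64MB'
+#     log_connections = 'on'
+#
+# to postgresql.conf; values are stringified and single-quoted, which the
+# configuration parser accepts for every setting type.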
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one;
+ currently only allowed if hostaddr is also provided
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # The host to use for connections: either the socket directory or the
+ # TCP address.
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+ # Between closing temp_sock and server start, we're racing against
+ # anything that wants to open up ephemeral ports, so try not to put
+ # any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(self._bindir / "psql", "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
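+ # Example: cleanups registered while a subcontext is active unwind when
+ # it exits, without disturbing callbacks on the outer stack:
+ #
+ #     with server.subcontext():
+ #         server.create_dbs("scratch")  # "scratchdb" is dropped at exit
+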
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, at least one match is required.
+ If a number, exactly that many matches are required.
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
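+ # Illustrative example (relies on log_connections, enabled in __init__):
+ #
+ #     with pg.log_contains(r"connection received", times=1):
+ #         pg.connect().sql("SELECT 1")
+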
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6, which we still want to support.
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
+ return run(
+ command, *args, stdout=stdout, encoding=encoding, **kwargs
+ ).stdout.removesuffix("\n")
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..ad109039668
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+import libpq
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises SyntaxError with correct SQLSTATE."""
+ with pytest.raises(libpq.errors.SyntaxError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields and can be caught as parent class."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(libpq.errors.UniqueViolation) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+    """Test 2D text array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
diff --git a/src/tools/generate_pytest_libpq_errors.py b/src/tools/generate_pytest_libpq_errors.py
new file mode 100755
index 00000000000..ba92891c17a
--- /dev/null
+++ b/src/tools/generate_pytest_libpq_errors.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Generate src/test/pytest/libpq/_generated_errors.py from errcodes.txt.
+"""
+
+import sys
+from pathlib import Path
+
+
+ACRONYMS = {"sql", "fdw"}
+WORD_MAP = {
+ "sqlclient": "SQLClient",
+ "sqlserver": "SQLServer",
+ "sqlconnection": "SQLConnection",
+}
+
+
+def snake_to_pascal(name: str) -> str:
+ """Convert snake_case to PascalCase, keeping acronyms uppercase."""
+ words = []
+ for word in name.split("_"):
+ if word in WORD_MAP:
+ words.append(WORD_MAP[word])
+ elif word in ACRONYMS:
+ words.append(word.upper())
+ else:
+ words.append(word.capitalize())
+ return "".join(words)
+
+
+def parse_errcodes(path: Path):
+ """Parse errcodes.txt and return list of (sqlstate, macro_name, spec_name) tuples."""
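+    # Data lines in errcodes.txt look like, for example:
+    #
+    #     22012    E    ERRCODE_DIVISION_BY_ZERO    division_by_zero
+    #
+    # i.e. a five-character SQLSTATE, a category letter, the C macro name,
+    # and the SQL-spec condition name. Comment and "Section:" lines fail the
+    # checks below and are skipped.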
+ errors = []
+
+ with open(path) as f:
+ for line in f:
+ parts = line.split()
+ if len(parts) >= 4 and len(parts[0]) == 5:
+ sqlstate, _, macro_name, spec_name = parts[:4]
+ errors.append((sqlstate, macro_name, spec_name))
+
+ return errors
+
+
+def macro_to_class_name(macro_name: str) -> str:
+ """Convert ERRCODE_FOO_BAR to FooBar."""
+ name = macro_name.removeprefix("ERRCODE_")
+ # Move WARNING prefix to the end as a suffix
+ if name.startswith("WARNING_"):
+ name = name.removeprefix("WARNING_") + "_WARNING"
+ return snake_to_pascal(name.lower())
+
+
+def generate_errors(errcodes_path: Path):
+ """Generate the _generated_errors.py content."""
+ errors = parse_errcodes(errcodes_path)
+
+ # Find spec_names that appear more than once (collisions)
+ spec_name_counts: dict[str, int] = {}
+ for _, _, spec_name in errors:
+ spec_name_counts[spec_name] = spec_name_counts.get(spec_name, 0) + 1
+ colliding_spec_names = {
+ name for name, count in spec_name_counts.items() if count > 1
+ }
+
+ lines = [
+ "# Copyright (c) 2025, PostgreSQL Global Development Group",
+ "# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.",
+ "",
+ '"""',
+ "Generated PostgreSQL error classes mapped from SQLSTATE codes.",
+ '"""',
+ "",
+ "from typing import Dict",
+ "",
+ "from ._error_base import LibpqError, LibpqWarning",
+ "",
+ "",
+ ]
+
+ generated_classes = {"LibpqError"}
+ sqlstate_to_exception = {}
+
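+    # For illustration, SQLSTATE 22012 / ERRCODE_DIVISION_BY_ZERO /
+    # division_by_zero becomes:
+    #
+    #     class DivisionByZero(DataException):
+    #         """SQLSTATE 22012 - division by zero."""
+    #
+    # with DataException being the class generated for 22000.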
+ for sqlstate, macro_name, spec_name in errors:
+ # 000 errors define the parent class for all errors in this SQLSTATE class
+ if sqlstate.endswith("000"):
+ exc_name = snake_to_pascal(spec_name)
+ if exc_name == "Warning":
+ parent = "LibpqWarning"
+ else:
+ parent = "LibpqError"
+ else:
+ if spec_name in colliding_spec_names:
+ exc_name = macro_to_class_name(macro_name)
+ else:
+ exc_name = snake_to_pascal(spec_name)
+ # Use parent class if available, otherwise LibpqError
+ parent = sqlstate_to_exception.get(sqlstate[:2] + "000", "LibpqError")
+ # Warnings should end with "Warning"
+ if parent == "Warning" and not exc_name.endswith("Warning"):
+ exc_name += "Warning"
+
+ generated_classes.add(exc_name)
+ sqlstate_to_exception[sqlstate] = exc_name
+ lines.extend(
+ [
+ f"class {exc_name}({parent}):",
+ f' """SQLSTATE {sqlstate} - {spec_name.replace("_", " ")}."""',
+ "",
+ " pass",
+ "",
+ "",
+ ]
+ )
+
+ lines.append("SQLSTATE_TO_EXCEPTION: Dict[str, type] = {")
+ for sqlstate, exc_name in sqlstate_to_exception.items():
+ lines.append(f' "{sqlstate}": {exc_name},')
+ lines.extend(["}", "", ""])
+
+    # Sort the class names so the generated file is deterministic (set
+    # iteration order is not).
+    all_exports = sorted(generated_classes) + ["SQLSTATE_TO_EXCEPTION"]
+ lines.append("__all__ = [")
+ for name in all_exports:
+ lines.append(f' "{name}",')
+ lines.append("]")
+
+ return "\n".join(lines) + "\n"
+
+
+if __name__ == "__main__":
+ script_dir = Path(__file__).resolve().parent
+ src_root = script_dir.parent.parent
+
+ errcodes_path = src_root / "src" / "backend" / "utils" / "errcodes.txt"
+ output_path = (
+ src_root / "src" / "test" / "pytest" / "libpq" / "_generated_errors.py"
+ )
+
+ if not errcodes_path.exists():
+ print(f"Error: {errcodes_path} not found", file=sys.stderr)
+ sys.exit(1)
+
+ output = generate_errors(errcodes_path)
+ output_path.write_text(output)
+ print(f"Generated {output_path}")
--
2.52.0
Attachment: v6-0005-Convert-load-balance-tests-from-perl-to-python.patch (text/x-patch)
From c62dcae4cc29b7a9e35dbd3e7aded99f21dbc9d4 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v6 5/7] Convert load balance tests from perl to python
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 9fe321147fc..41ea88c7388 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index b259c998fa2..6d62ac17edb 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes with different socket directories.
+
+ Each node has its own Unix socket directory for isolation.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
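+    # libpq pairs comma-separated hosts and ports positionally:
+    # host=h1,h2,h3 port=p1,p2,p3 tries (h1,p1), (h2,p2), (h3,p3) in order.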
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+ Windows or Linux are required to run this test because these OSes allow
+ binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
+ don't. We need to bind to different IP addresses, so that we can use these
+ different IP addresses in the hosts file.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+ except (OSError, IOError):
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
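+    # getfixturevalue() sets up the selected environment lazily, so a skip
+    # inside the dns variant doesn't affect the hostlist runs.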
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+    # Statistically, each node should receive at least one connection. The
+    # probability of a given node receiving none is (2/3)^50 ≈ 1.57e-9.
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 7a4c14ada98..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2025, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index 2b4bd261c3d..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2025, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
Attachment: v6-0006-WIP-pytest-Add-some-SSL-client-tests.patch (text/x-patch)
From 5a017da9537df3e034610db593ea4fbf5e23ee8f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v6 6/7] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
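To sketch the intended shape of a test (a rough illustration; connect and
LibpqError come from the shared pytest infrastructure):

    def test_example(connect, tcp_server):
        def server_logic(s: socket.socket):
            ...  # speak the wire protocol; assert on what the client sent

        tcp_server.background(server_logic)

        with pytest.raises(LibpqError):
            connect(**tcp_server.conninfo)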
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 ++-
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 434 insertions(+), 6 deletions(-)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a2c3febc30c..41d2a3c1867 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -229,6 +229,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -323,6 +324,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -346,8 +348,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -508,8 +511,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -658,6 +662,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -678,6 +683,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -816,7 +822,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -879,7 +885,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index e8a1639db2d..895ea5ea41c 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index d8e0fb518e0..a0ee2af0899 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+    - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
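+
+    As exercised by the tests below, certs.server typically backs the mock
+    server's TLS context, while clients verify it via
+    sslrootcert=certs.ca.certpath.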
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
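+                # (1234, 5679) is the reserved SSLRequest version code.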
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
Attachment: v6-0007-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)
From 772d4025a2d7321f324cc8b9d8aba927ae304e4e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v6 7/7] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
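Roughly, the direct-SSL connection style the test exercises (no
SSLRequest; TLS from the first byte, with ALPN "postgresql") boils down
to:

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.set_alpn_protocols(["postgresql"])
    with socket.create_connection(addr) as s:
        conn = ctx.wrap_socket(s, server_hostname=host)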
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
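+            # Frame: 4-byte length word, 4-byte protocol version (3, 0),
+            # then the null-terminated option strings.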
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
On Sat Dec 27, 2025 at 6:26 PM CET, Jelte Fennema-Nio wrote:
Attached is a version where I addressed all of those comments
Rebased again
Attachments:
Attachment: v7-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch (text/x-patch)
From bb4b75d622880bd3a546768d67d4f8697e484319 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v7 1/9] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index 467f7f005a6..2d1ea92c875 100644
--- a/meson.build
+++ b/meson.build
@@ -3973,6 +3973,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -4009,3 +4010,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: 094b61ce3ebbb1258675cb9b4eca9198628e2177
--
2.52.0
Attachment: v7-0002-Add-support-for-pytest-test-suites.patch (text/x-patch)
From 9f01a8e37f0d51b8e8c2ed4dd0d79fc2e8e742d6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v7 2/9] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
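For illustration, the core of a TAP-emitting plugin is little more than
a reporting hook along these lines (a sketch only; the real pgtap.py
also deals with collection failures and diagnostics):

    def pytest_runtest_logreport(report):
        if report.when == "call":
            status = "ok" if report.passed else "not ok"
            print(f"{status} - {report.nodeid}")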
TODOs:
- The Chocolatey CI setup is subpar. Need to find a way to bless the
dependencies in use rather than pulling from pip... or maybe that will
be done by the image baker.
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 37 +++++--
.gitignore | 3 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 29 +++++-
meson.build | 107 +++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 1 +
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 630 insertions(+), 15 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 038d043d00e..a83acb39e97 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -225,7 +227,9 @@ task:
chown root:postgres /tmp/cores
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
- #pkg install -y ...
+ pkg install -y \
+ py311-packaging \
+ py311-pytest
# NB: Intentionally build without -Dllvm. The freebsd image size is already
# large enough to make VM startup slow, and even without llvm freebsd
@@ -317,7 +321,10 @@ task:
-Dpam=enabled
setup_additional_packages_script: |
- #pkgin -y install ...
+ pkgin -y install \
+ py312-packaging \
+ py312-test
+ ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
<<: *netbsd_task_template
- name: OpenBSD - Meson
@@ -337,7 +344,9 @@ task:
-Duuid=e2fs
setup_additional_packages_script: |
- #pkg_add -I ...
+ pkg_add -I \
+ py3-test \
+ py3-packaging
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -496,8 +505,10 @@ task:
EOF
setup_additional_packages_script: |
- #apt-get update
- #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+ apt-get update
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ python3-pytest \
+ python3-packaging
matrix:
# SPECIAL:
@@ -521,14 +532,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -665,6 +677,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -714,6 +728,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -787,6 +802,8 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
+ -DPYTEST=c:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python310\Scripts\pytest.exe
-Dplperl=enabled
-Dplpython=enabled
@@ -795,8 +812,10 @@ task:
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
+ # XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
+ pip3 install --user packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -859,7 +878,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- REM C:\msys64\usr\bin\pacman.exe -S --noconfirm ...
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a550ce6194b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +44,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 78597c6229a..a03a2eed401 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19197,6 +19230,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 2ccf410f94c..4e50c2ac42b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2408,6 +2413,26 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, pytest py.test)
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ PGAC_PATH_PROGS(UV, uv)
+ if test -n "$UV"; then
+ AC_MSG_CHECKING([whether uv can install pytest dependencies])
+ if "$UV" pip install "$srcdir" >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$UV run pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found and uv failed to install dependencies])
+ fi
+ else
+ AC_MSG_ERROR([pytest not found])
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 2d1ea92c875..94ed6ca6c15 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,41 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest = not_found_dep
+uv = not_found_dep
+use_uv = false
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: false)
+
+ # If pytest not found, try installing with uv
+ if not pytest.found()
+ uv = find_program('uv', native: true, required: false)
+ if uv.found()
+ message('Installing pytest dependencies with uv...')
+ uv_install = run_command(uv, 'sync', meson.project_source_root(), check: false)
+ if uv_install.returncode() == 0
+ use_uv = true
+ pytest_enabled = true
+ endif
+ endif
+ else
+ pytest_enabled = true
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3800,6 +3835,76 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ if use_uv
+ test_command = [uv.full_path(), 'run', 'pytest']
+ elif pytest_enabled
+ test_command = [pytest.full_path()]
+ else
+ # Dummy value - test will be skipped anyway
+ test_command = ['pytest']
+ endif
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ # We also configure the same PYTHONPATH in the pytest settings in
+ # pyproject.toml, but pytest versions below 8.4 only actually use that
+ # value after plugin loading. So we need to configure it here too. This
+ # won't help people manually running pytest outside of meson/make, but we
+ # expect those to use a recent enough version of pytest anyway (and if
+ # not they can manually configure PYTHONPATH too).
+ env.prepend('PYTHONPATH', meson.project_source_root() / 'src' / 'test' / 'pytest')
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3974,6 +4079,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest,
},
section: 'Programs',
)
@@ -4014,6 +4120,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 6a793f3e479..cb4825c3575 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
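+ # (For example, a hypothetical test might begin with
+ # "cryptography = pytest.importorskip('cryptography')".)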
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 124df2c8582..04ad26dabc6 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,7 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
@@ -145,6 +146,7 @@ pgxs_bins = {
'OPENSSL': openssl,
'PERL': perl,
'PROVE': prove,
+ 'PYTEST': pytest,
'PYTHON': python,
'TAR': tar,
'ZSTD': program_zstd,
diff --git a/src/test/Makefile b/src/test/Makefile
index 3eb0a06abb4..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -18,6 +18,7 @@ SUBDIRS = \
modules \
perl \
postmaster \
+ pytest \
recovery \
regress \
subscription
diff --git a/src/test/meson.build b/src/test/meson.build
index cd45cbf57fb..09175f0eaea 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
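+# Illustrative sample of the TAP stream this reporter produces:
+#
+#   1..2
+#   ok 1 - pyt/test_something.py::test_ok
+#   not ok 2 - pyt/test_something.py::test_fail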
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (e.g. \033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
--
2.52.0
Attachment: v7-0003-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch (text/x-patch)
From ecd9898734ba315d291c7bd4d5aa8a84659ff8a6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v7 3/9] ci: Add MTEST_SUITES for optional test tailoring
Should make it easier to control the test cycle time for Cirrus. Add the
desired suites (remembering `--suite setup`!) to the top-level envvar.
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a83acb39e97..a2c3febc30c 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -251,7 +252,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -396,7 +397,7 @@ task:
# Otherwise tests will fail on OpenBSD, due to inability to start enough
# processes.
ulimit -p 256
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -614,7 +615,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -627,7 +628,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -751,7 +752,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -834,7 +835,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -895,7 +896,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.52.0
Attachment: v7-0004-Add-pytest-infrastructure-to-interact-with-Postgr.patch (text/x-patch)
From 38fbdb7a199cbfc2feaa125424d88c275222f9be Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v7 4/9] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be easy enough to use that tests
contain only the logic for the behaviour under test, rather than a pile
of boilerplate. Some examples: types get converted to their Python
counterparts automatically; errors become actual Python exceptions; and
results of queries that return only a single row or cell are unpacked
automatically, so you don't have to write rows[0][0] to get at a single
cell.
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
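
For example (names are illustrative), a test written against this
infrastructure can be as short as:

    def test_insert_roundtrip(conn):
        conn.sql("CREATE TEMP TABLE t (n int)")
        conn.sql("INSERT INTO t VALUES (1), (2)")
        assert conn.sql("SELECT n FROM t ORDER BY n") == [1, 2]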
---
doc/src/sgml/regress.sgml | 54 +-
pyproject.toml | 3 +
src/backend/utils/errcodes.txt | 5 +
src/test/pytest/README | 140 +-
src/test/pytest/libpq/__init__.py | 36 +
src/test/pytest/libpq/_core.py | 489 +++++
src/test/pytest/libpq/_error_base.py | 74 +
src/test/pytest/libpq/_generated_errors.py | 2116 ++++++++++++++++++++
src/test/pytest/libpq/errors.py | 39 +
src/test/pytest/meson.build | 5 +-
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 +
src/test/pytest/pypg/fixtures.py | 335 ++++
src/test/pytest/pypg/server.py | 470 +++++
src/test/pytest/pypg/util.py | 42 +
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 +
src/test/pytest/pyt/test_libpq.py | 172 ++
src/test/pytest/pyt/test_multi_server.py | 46 +
src/test/pytest/pyt/test_query_helpers.py | 347 ++++
src/tools/generate_pytest_libpq_errors.py | 147 ++
21 files changed, 4634 insertions(+), 3 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/_error_base.py
create mode 100644 src/test/pytest/libpq/_generated_errors.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
create mode 100755 src/tools/generate_pytest_libpq_errors.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..1440815b23a 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,58 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need either
+ <application>pytest</application> or <application>uv</application>
+ installed on your system.
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
+ </para>
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index 5b25402ebbe..b1d0ad4baf4 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -21,6 +21,11 @@
# doc/src/sgml/errcodes-table.sgml
# a SGML table of error codes for inclusion in the documentation
#
+# src/test/pytest/libpq/_generated_errors.py
+# Python exception classes for the pytest libpq wrapper
+# Note: This needs to be manually regenerated by running
+# src/tools/generate_pytest_libpq_errors.py
+#
# The format of this file is one error code per line, with the following
# whitespace-separated fields:
#
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..9dc50ca111f 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,139 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have either pytest or uv
+already installed.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf-based builds, you can run the following from the
+src/test/pytest directory:
+ make check
+(make installcheck is not currently supported for pytest.)
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+ A simple but user-friendly Python wrapper around libpq
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A PGconn instance connected to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example test:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server is independent
+ assert node1.port != node2.port
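+
+Server errors surface as typed Python exceptions (a sketch; assumes the
+test module imports pytest and that the error classes are importable
+from libpq.errors):
+
+ def test_division_by_zero(conn):
+ with pytest.raises(errors.DivisionByZero):
+ conn.sql("SELECT 1/0")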
+
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+ # After the test finishes, the original configuration is restored and
+ # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
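+
+For example (a sketch; assumes "import pytest" in the test module):
+
+ def test_something_slow(remaining_timeout):
+ if remaining_timeout() < 10:
+ pytest.skip("not enough time left for this test")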
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..cb4d18b6206
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError, LibpqWarning
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "LibpqWarning",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..0d77996d572
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,489 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError, make_error
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ if system == "Windows":
+ # On Windows, libpq.dll is confusingly in bindir, not libdir, and we
+ # need to add this directory to the search path.
+ libpq_path = os.path.join(bindir, name)
+ lib = ctypes.CDLL(libpq_path)
+ else:
+ libpq_path = os.path.join(libdir, name)
+ lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises an appropriate LibpqError subclass based on the error fields.
+ Extracts SQLSTATE and other diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise make_error(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
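+
+ For example, connstr({"host": "/tmp", "dbname": "my db"}) produces
+ "host=/tmp dbname='my db'".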
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/_error_base.py b/src/test/pytest/libpq/_error_base.py
new file mode 100644
index 00000000000..5c70c077193
--- /dev/null
+++ b/src/test/pytest/libpq/_error_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Base exception classes for libpq errors and warnings.
+"""
+
+from typing import Optional
+
+
+class LibpqExceptionMixin:
+ """Mixin providing PostgreSQL error field attributes."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
+
+
+class LibpqError(LibpqExceptionMixin, RuntimeError):
+ """Base exception for libpq errors."""
+
+ pass
+
+
+class LibpqWarning(LibpqExceptionMixin, UserWarning):
+ """Base exception for libpq warnings."""
+
+ pass
diff --git a/src/test/pytest/libpq/_generated_errors.py b/src/test/pytest/libpq/_generated_errors.py
new file mode 100644
index 00000000000..f50f3143580
--- /dev/null
+++ b/src/test/pytest/libpq/_generated_errors.py
@@ -0,0 +1,2116 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.
+
+"""
+Generated PostgreSQL error classes mapped from SQLSTATE codes.
+"""
+
+from typing import Dict
+
+from ._error_base import LibpqError, LibpqWarning
+
+
+class SuccessfulCompletion(LibpqError):
+ """SQLSTATE 00000 - successful completion."""
+
+ pass
+
+
+class Warning(LibpqWarning):
+ """SQLSTATE 01000 - warning."""
+
+ pass
+
+
+class DynamicResultSetsReturnedWarning(Warning):
+ """SQLSTATE 0100C - dynamic result sets returned."""
+
+ pass
+
+
+class ImplicitZeroBitPaddingWarning(Warning):
+ """SQLSTATE 01008 - implicit zero bit padding."""
+
+ pass
+
+
+class NullValueEliminatedInSetFunctionWarning(Warning):
+ """SQLSTATE 01003 - null value eliminated in set function."""
+
+ pass
+
+
+class PrivilegeNotGrantedWarning(Warning):
+ """SQLSTATE 01007 - privilege not granted."""
+
+ pass
+
+
+class PrivilegeNotRevokedWarning(Warning):
+ """SQLSTATE 01006 - privilege not revoked."""
+
+ pass
+
+
+class StringDataRightTruncationWarning(Warning):
+ """SQLSTATE 01004 - string data right truncation."""
+
+ pass
+
+
+class DeprecatedFeatureWarning(Warning):
+ """SQLSTATE 01P01 - deprecated feature."""
+
+ pass
+
+
+class NoData(LibpqError):
+ """SQLSTATE 02000 - no data."""
+
+ pass
+
+
+class NoAdditionalDynamicResultSetsReturned(NoData):
+ """SQLSTATE 02001 - no additional dynamic result sets returned."""
+
+ pass
+
+
+class SQLStatementNotYetComplete(LibpqError):
+ """SQLSTATE 03000 - sql statement not yet complete."""
+
+ pass
+
+
+class ConnectionException(LibpqError):
+ """SQLSTATE 08000 - connection exception."""
+
+ pass
+
+
+class ConnectionDoesNotExist(ConnectionException):
+ """SQLSTATE 08003 - connection does not exist."""
+
+ pass
+
+
+class ConnectionFailure(ConnectionException):
+ """SQLSTATE 08006 - connection failure."""
+
+ pass
+
+
+class SQLClientUnableToEstablishSQLConnection(ConnectionException):
+ """SQLSTATE 08001 - sqlclient unable to establish sqlconnection."""
+
+ pass
+
+
+class SQLServerRejectedEstablishmentOfSQLConnection(ConnectionException):
+ """SQLSTATE 08004 - sqlserver rejected establishment of sqlconnection."""
+
+ pass
+
+
+class TransactionResolutionUnknown(ConnectionException):
+ """SQLSTATE 08007 - transaction resolution unknown."""
+
+ pass
+
+
+class ProtocolViolation(ConnectionException):
+ """SQLSTATE 08P01 - protocol violation."""
+
+ pass
+
+
+class TriggeredActionException(LibpqError):
+ """SQLSTATE 09000 - triggered action exception."""
+
+ pass
+
+
+class FeatureNotSupported(LibpqError):
+ """SQLSTATE 0A000 - feature not supported."""
+
+ pass
+
+
+class InvalidTransactionInitiation(LibpqError):
+ """SQLSTATE 0B000 - invalid transaction initiation."""
+
+ pass
+
+
+class LocatorException(LibpqError):
+ """SQLSTATE 0F000 - locator exception."""
+
+ pass
+
+
+class InvalidLocatorSpecification(LocatorException):
+ """SQLSTATE 0F001 - invalid locator specification."""
+
+ pass
+
+
+class InvalidGrantor(LibpqError):
+ """SQLSTATE 0L000 - invalid grantor."""
+
+ pass
+
+
+class InvalidGrantOperation(InvalidGrantor):
+ """SQLSTATE 0LP01 - invalid grant operation."""
+
+ pass
+
+
+class InvalidRoleSpecification(LibpqError):
+ """SQLSTATE 0P000 - invalid role specification."""
+
+ pass
+
+
+class DiagnosticsException(LibpqError):
+ """SQLSTATE 0Z000 - diagnostics exception."""
+
+ pass
+
+
+class StackedDiagnosticsAccessedWithoutActiveHandler(DiagnosticsException):
+ """SQLSTATE 0Z002 - stacked diagnostics accessed without active handler."""
+
+ pass
+
+
+class InvalidArgumentForXquery(LibpqError):
+ """SQLSTATE 10608 - invalid argument for xquery."""
+
+ pass
+
+
+class CaseNotFound(LibpqError):
+ """SQLSTATE 20000 - case not found."""
+
+ pass
+
+
+class CardinalityViolation(LibpqError):
+ """SQLSTATE 21000 - cardinality violation."""
+
+ pass
+
+
+class DataException(LibpqError):
+ """SQLSTATE 22000 - data exception."""
+
+ pass
+
+
+class ArraySubscriptError(DataException):
+ """SQLSTATE 2202E - array subscript error."""
+
+ pass
+
+
+class CharacterNotInRepertoire(DataException):
+ """SQLSTATE 22021 - character not in repertoire."""
+
+ pass
+
+
+class DatetimeFieldOverflow(DataException):
+ """SQLSTATE 22008 - datetime field overflow."""
+
+ pass
+
+
+class DivisionByZero(DataException):
+ """SQLSTATE 22012 - division by zero."""
+
+ pass
+
+
+class ErrorInAssignment(DataException):
+ """SQLSTATE 22005 - error in assignment."""
+
+ pass
+
+
+class EscapeCharacterConflict(DataException):
+ """SQLSTATE 2200B - escape character conflict."""
+
+ pass
+
+
+class IndicatorOverflow(DataException):
+ """SQLSTATE 22022 - indicator overflow."""
+
+ pass
+
+
+class IntervalFieldOverflow(DataException):
+ """SQLSTATE 22015 - interval field overflow."""
+
+ pass
+
+
+class InvalidArgumentForLogarithm(DataException):
+ """SQLSTATE 2201E - invalid argument for logarithm."""
+
+ pass
+
+
+class InvalidArgumentForNtileFunction(DataException):
+ """SQLSTATE 22014 - invalid argument for ntile function."""
+
+ pass
+
+
+class InvalidArgumentForNthValueFunction(DataException):
+ """SQLSTATE 22016 - invalid argument for nth value function."""
+
+ pass
+
+
+class InvalidArgumentForPowerFunction(DataException):
+ """SQLSTATE 2201F - invalid argument for power function."""
+
+ pass
+
+
+class InvalidArgumentForWidthBucketFunction(DataException):
+ """SQLSTATE 2201G - invalid argument for width bucket function."""
+
+ pass
+
+
+class InvalidCharacterValueForCast(DataException):
+ """SQLSTATE 22018 - invalid character value for cast."""
+
+ pass
+
+
+class InvalidDatetimeFormat(DataException):
+ """SQLSTATE 22007 - invalid datetime format."""
+
+ pass
+
+
+class InvalidEscapeCharacter(DataException):
+ """SQLSTATE 22019 - invalid escape character."""
+
+ pass
+
+
+class InvalidEscapeOctet(DataException):
+ """SQLSTATE 2200D - invalid escape octet."""
+
+ pass
+
+
+class InvalidEscapeSequence(DataException):
+ """SQLSTATE 22025 - invalid escape sequence."""
+
+ pass
+
+
+class NonstandardUseOfEscapeCharacter(DataException):
+ """SQLSTATE 22P06 - nonstandard use of escape character."""
+
+ pass
+
+
+class InvalidIndicatorParameterValue(DataException):
+ """SQLSTATE 22010 - invalid indicator parameter value."""
+
+ pass
+
+
+class InvalidParameterValue(DataException):
+ """SQLSTATE 22023 - invalid parameter value."""
+
+ pass
+
+
+class InvalidPrecedingOrFollowingSize(DataException):
+ """SQLSTATE 22013 - invalid preceding or following size."""
+
+ pass
+
+
+class InvalidRegularExpression(DataException):
+ """SQLSTATE 2201B - invalid regular expression."""
+
+ pass
+
+
+class InvalidRowCountInLimitClause(DataException):
+ """SQLSTATE 2201W - invalid row count in limit clause."""
+
+ pass
+
+
+class InvalidRowCountInResultOffsetClause(DataException):
+ """SQLSTATE 2201X - invalid row count in result offset clause."""
+
+ pass
+
+
+class InvalidTablesampleArgument(DataException):
+ """SQLSTATE 2202H - invalid tablesample argument."""
+
+ pass
+
+
+class InvalidTablesampleRepeat(DataException):
+ """SQLSTATE 2202G - invalid tablesample repeat."""
+
+ pass
+
+
+class InvalidTimeZoneDisplacementValue(DataException):
+ """SQLSTATE 22009 - invalid time zone displacement value."""
+
+ pass
+
+
+class InvalidUseOfEscapeCharacter(DataException):
+ """SQLSTATE 2200C - invalid use of escape character."""
+
+ pass
+
+
+class MostSpecificTypeMismatch(DataException):
+ """SQLSTATE 2200G - most specific type mismatch."""
+
+ pass
+
+
+class NullValueNotAllowed(DataException):
+ """SQLSTATE 22004 - null value not allowed."""
+
+ pass
+
+
+class NullValueNoIndicatorParameter(DataException):
+ """SQLSTATE 22002 - null value no indicator parameter."""
+
+ pass
+
+
+class NumericValueOutOfRange(DataException):
+ """SQLSTATE 22003 - numeric value out of range."""
+
+ pass
+
+
+class SequenceGeneratorLimitExceeded(DataException):
+ """SQLSTATE 2200H - sequence generator limit exceeded."""
+
+ pass
+
+
+class StringDataLengthMismatch(DataException):
+ """SQLSTATE 22026 - string data length mismatch."""
+
+ pass
+
+
+class StringDataRightTruncation(DataException):
+ """SQLSTATE 22001 - string data right truncation."""
+
+ pass
+
+
+class SubstringError(DataException):
+ """SQLSTATE 22011 - substring error."""
+
+ pass
+
+
+class TrimError(DataException):
+ """SQLSTATE 22027 - trim error."""
+
+ pass
+
+
+class UnterminatedCString(DataException):
+ """SQLSTATE 22024 - unterminated c string."""
+
+ pass
+
+
+class ZeroLengthCharacterString(DataException):
+ """SQLSTATE 2200F - zero length character string."""
+
+ pass
+
+
+class FloatingPointException(DataException):
+ """SQLSTATE 22P01 - floating point exception."""
+
+ pass
+
+
+class InvalidTextRepresentation(DataException):
+ """SQLSTATE 22P02 - invalid text representation."""
+
+ pass
+
+
+class InvalidBinaryRepresentation(DataException):
+ """SQLSTATE 22P03 - invalid binary representation."""
+
+ pass
+
+
+class BadCopyFileFormat(DataException):
+ """SQLSTATE 22P04 - bad copy file format."""
+
+ pass
+
+
+class UntranslatableCharacter(DataException):
+ """SQLSTATE 22P05 - untranslatable character."""
+
+ pass
+
+
+class NotAnXmlDocument(DataException):
+ """SQLSTATE 2200L - not an xml document."""
+
+ pass
+
+
+class InvalidXmlDocument(DataException):
+ """SQLSTATE 2200M - invalid xml document."""
+
+ pass
+
+
+class InvalidXmlContent(DataException):
+ """SQLSTATE 2200N - invalid xml content."""
+
+ pass
+
+
+class InvalidXmlComment(DataException):
+ """SQLSTATE 2200S - invalid xml comment."""
+
+ pass
+
+
+class InvalidXmlProcessingInstruction(DataException):
+ """SQLSTATE 2200T - invalid xml processing instruction."""
+
+ pass
+
+
+class DuplicateJsonObjectKeyValue(DataException):
+ """SQLSTATE 22030 - duplicate json object key value."""
+
+ pass
+
+
+class InvalidArgumentForSQLJsonDatetimeFunction(DataException):
+ """SQLSTATE 22031 - invalid argument for sql json datetime function."""
+
+ pass
+
+
+class InvalidJsonText(DataException):
+ """SQLSTATE 22032 - invalid json text."""
+
+ pass
+
+
+class InvalidSQLJsonSubscript(DataException):
+ """SQLSTATE 22033 - invalid sql json subscript."""
+
+ pass
+
+
+class MoreThanOneSQLJsonItem(DataException):
+ """SQLSTATE 22034 - more than one sql json item."""
+
+ pass
+
+
+class NoSQLJsonItem(DataException):
+ """SQLSTATE 22035 - no sql json item."""
+
+ pass
+
+
+class NonNumericSQLJsonItem(DataException):
+ """SQLSTATE 22036 - non numeric sql json item."""
+
+ pass
+
+
+class NonUniqueKeysInAJsonObject(DataException):
+ """SQLSTATE 22037 - non unique keys in a json object."""
+
+ pass
+
+
+class SingletonSQLJsonItemRequired(DataException):
+ """SQLSTATE 22038 - singleton sql json item required."""
+
+ pass
+
+
+class SQLJsonArrayNotFound(DataException):
+ """SQLSTATE 22039 - sql json array not found."""
+
+ pass
+
+
+class SQLJsonMemberNotFound(DataException):
+ """SQLSTATE 2203A - sql json member not found."""
+
+ pass
+
+
+class SQLJsonNumberNotFound(DataException):
+ """SQLSTATE 2203B - sql json number not found."""
+
+ pass
+
+
+class SQLJsonObjectNotFound(DataException):
+ """SQLSTATE 2203C - sql json object not found."""
+
+ pass
+
+
+class TooManyJsonArrayElements(DataException):
+ """SQLSTATE 2203D - too many json array elements."""
+
+ pass
+
+
+class TooManyJsonObjectMembers(DataException):
+ """SQLSTATE 2203E - too many json object members."""
+
+ pass
+
+
+class SQLJsonScalarRequired(DataException):
+ """SQLSTATE 2203F - sql json scalar required."""
+
+ pass
+
+
+class SQLJsonItemCannotBeCastToTargetType(DataException):
+ """SQLSTATE 2203G - sql json item cannot be cast to target type."""
+
+ pass
+
+
+class IntegrityConstraintViolation(LibpqError):
+ """SQLSTATE 23000 - integrity constraint violation."""
+
+ pass
+
+
+class RestrictViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23001 - restrict violation."""
+
+ pass
+
+
+class NotNullViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23502 - not null violation."""
+
+ pass
+
+
+class ForeignKeyViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23503 - foreign key violation."""
+
+ pass
+
+
+class UniqueViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23505 - unique violation."""
+
+ pass
+
+
+class CheckViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23514 - check violation."""
+
+ pass
+
+
+class ExclusionViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23P01 - exclusion violation."""
+
+ pass
+
+
+class InvalidCursorState(LibpqError):
+ """SQLSTATE 24000 - invalid cursor state."""
+
+ pass
+
+
+class InvalidTransactionState(LibpqError):
+ """SQLSTATE 25000 - invalid transaction state."""
+
+ pass
+
+
+class ActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25001 - active sql transaction."""
+
+ pass
+
+
+class BranchTransactionAlreadyActive(InvalidTransactionState):
+ """SQLSTATE 25002 - branch transaction already active."""
+
+ pass
+
+
+class HeldCursorRequiresSameIsolationLevel(InvalidTransactionState):
+ """SQLSTATE 25008 - held cursor requires same isolation level."""
+
+ pass
+
+
+class InappropriateAccessModeForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25003 - inappropriate access mode for branch transaction."""
+
+ pass
+
+
+class InappropriateIsolationLevelForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25004 - inappropriate isolation level for branch transaction."""
+
+ pass
+
+
+class NoActiveSQLTransactionForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25005 - no active sql transaction for branch transaction."""
+
+ pass
+
+
+class ReadOnlySQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25006 - read only sql transaction."""
+
+ pass
+
+
+class SchemaAndDataStatementMixingNotSupported(InvalidTransactionState):
+ """SQLSTATE 25007 - schema and data statement mixing not supported."""
+
+ pass
+
+
+class NoActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P01 - no active sql transaction."""
+
+ pass
+
+
+class InFailedSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P02 - in failed sql transaction."""
+
+ pass
+
+
+class IdleInTransactionSessionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P03 - idle in transaction session timeout."""
+
+ pass
+
+
+class TransactionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P04 - transaction timeout."""
+
+ pass
+
+
+class InvalidSQLStatementName(LibpqError):
+ """SQLSTATE 26000 - invalid sql statement name."""
+
+ pass
+
+
+class TriggeredDataChangeViolation(LibpqError):
+ """SQLSTATE 27000 - triggered data change violation."""
+
+ pass
+
+
+class InvalidAuthorizationSpecification(LibpqError):
+ """SQLSTATE 28000 - invalid authorization specification."""
+
+ pass
+
+
+class InvalidPassword(InvalidAuthorizationSpecification):
+ """SQLSTATE 28P01 - invalid password."""
+
+ pass
+
+
+class DependentPrivilegeDescriptorsStillExist(LibpqError):
+ """SQLSTATE 2B000 - dependent privilege descriptors still exist."""
+
+ pass
+
+
+class DependentObjectsStillExist(DependentPrivilegeDescriptorsStillExist):
+ """SQLSTATE 2BP01 - dependent objects still exist."""
+
+ pass
+
+
+class InvalidTransactionTermination(LibpqError):
+ """SQLSTATE 2D000 - invalid transaction termination."""
+
+ pass
+
+
+class SQLRoutineException(LibpqError):
+ """SQLSTATE 2F000 - sql routine exception."""
+
+ pass
+
+
+class FunctionExecutedNoReturnStatement(SQLRoutineException):
+ """SQLSTATE 2F005 - function executed no return statement."""
+
+ pass
+
+
+class SREModifyingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F002 - modifying sql data not permitted."""
+
+ pass
+
+
+class SREProhibitedSQLStatementAttempted(SQLRoutineException):
+ """SQLSTATE 2F003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class SREReadingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F004 - reading sql data not permitted."""
+
+ pass
+
+
+class InvalidCursorName(LibpqError):
+ """SQLSTATE 34000 - invalid cursor name."""
+
+ pass
+
+
+class ExternalRoutineException(LibpqError):
+ """SQLSTATE 38000 - external routine exception."""
+
+ pass
+
+
+class ContainingSQLNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38001 - containing sql not permitted."""
+
+ pass
+
+
+class EREModifyingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38002 - modifying sql data not permitted."""
+
+ pass
+
+
+class EREProhibitedSQLStatementAttempted(ExternalRoutineException):
+ """SQLSTATE 38003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class EREReadingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38004 - reading sql data not permitted."""
+
+ pass
+
+
+class ExternalRoutineInvocationException(LibpqError):
+ """SQLSTATE 39000 - external routine invocation exception."""
+
+ pass
+
+
+class InvalidSqlstateReturned(ExternalRoutineInvocationException):
+ """SQLSTATE 39001 - invalid sqlstate returned."""
+
+ pass
+
+
+class ERIENullValueNotAllowed(ExternalRoutineInvocationException):
+ """SQLSTATE 39004 - null value not allowed."""
+
+ pass
+
+
+class TriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P01 - trigger protocol violated."""
+
+ pass
+
+
+class SrfProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P02 - srf protocol violated."""
+
+ pass
+
+
+class EventTriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P03 - event trigger protocol violated."""
+
+ pass
+
+
+class SavepointException(LibpqError):
+ """SQLSTATE 3B000 - savepoint exception."""
+
+ pass
+
+
+class InvalidSavepointSpecification(SavepointException):
+ """SQLSTATE 3B001 - invalid savepoint specification."""
+
+ pass
+
+
+class InvalidCatalogName(LibpqError):
+ """SQLSTATE 3D000 - invalid catalog name."""
+
+ pass
+
+
+class InvalidSchemaName(LibpqError):
+ """SQLSTATE 3F000 - invalid schema name."""
+
+ pass
+
+
+class TransactionRollback(LibpqError):
+ """SQLSTATE 40000 - transaction rollback."""
+
+ pass
+
+
+class TransactionIntegrityConstraintViolation(TransactionRollback):
+ """SQLSTATE 40002 - transaction integrity constraint violation."""
+
+ pass
+
+
+class SerializationFailure(TransactionRollback):
+ """SQLSTATE 40001 - serialization failure."""
+
+ pass
+
+
+class StatementCompletionUnknown(TransactionRollback):
+ """SQLSTATE 40003 - statement completion unknown."""
+
+ pass
+
+
+class DeadlockDetected(TransactionRollback):
+ """SQLSTATE 40P01 - deadlock detected."""
+
+ pass
+
+
+class SyntaxErrorOrAccessRuleViolation(LibpqError):
+ """SQLSTATE 42000 - syntax error or access rule violation."""
+
+ pass
+
+
+class SyntaxError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42601 - syntax error."""
+
+ pass
+
+
+class InsufficientPrivilege(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42501 - insufficient privilege."""
+
+ pass
+
+
+class CannotCoerce(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42846 - cannot coerce."""
+
+ pass
+
+
+class GroupingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42803 - grouping error."""
+
+ pass
+
+
+class WindowingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P20 - windowing error."""
+
+ pass
+
+
+class InvalidRecursion(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P19 - invalid recursion."""
+
+ pass
+
+
+class InvalidForeignKey(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42830 - invalid foreign key."""
+
+ pass
+
+
+class InvalidName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42602 - invalid name."""
+
+ pass
+
+
+class NameTooLong(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42622 - name too long."""
+
+ pass
+
+
+class ReservedName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42939 - reserved name."""
+
+ pass
+
+
+class DatatypeMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42804 - datatype mismatch."""
+
+ pass
+
+
+class IndeterminateDatatype(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P18 - indeterminate datatype."""
+
+ pass
+
+
+class CollationMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P21 - collation mismatch."""
+
+ pass
+
+
+class IndeterminateCollation(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P22 - indeterminate collation."""
+
+ pass
+
+
+class WrongObjectType(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42809 - wrong object type."""
+
+ pass
+
+
+class GeneratedAlways(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 428C9 - generated always."""
+
+ pass
+
+
+class UndefinedColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42703 - undefined column."""
+
+ pass
+
+
+class UndefinedFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42883 - undefined function."""
+
+ pass
+
+
+class UndefinedTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P01 - undefined table."""
+
+ pass
+
+
+class UndefinedParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P02 - undefined parameter."""
+
+ pass
+
+
+class UndefinedObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42704 - undefined object."""
+
+ pass
+
+
+class DuplicateColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42701 - duplicate column."""
+
+ pass
+
+
+class DuplicateCursor(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P03 - duplicate cursor."""
+
+ pass
+
+
+class DuplicateDatabase(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P04 - duplicate database."""
+
+ pass
+
+
+class DuplicateFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42723 - duplicate function."""
+
+ pass
+
+
+class DuplicatePreparedStatement(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P05 - duplicate prepared statement."""
+
+ pass
+
+
+class DuplicateSchema(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P06 - duplicate schema."""
+
+ pass
+
+
+class DuplicateTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P07 - duplicate table."""
+
+ pass
+
+
+class DuplicateAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42712 - duplicate alias."""
+
+ pass
+
+
+class DuplicateObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42710 - duplicate object."""
+
+ pass
+
+
+class AmbiguousColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42702 - ambiguous column."""
+
+ pass
+
+
+class AmbiguousFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42725 - ambiguous function."""
+
+ pass
+
+
+class AmbiguousParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P08 - ambiguous parameter."""
+
+ pass
+
+
+class AmbiguousAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P09 - ambiguous alias."""
+
+ pass
+
+
+class InvalidColumnReference(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P10 - invalid column reference."""
+
+ pass
+
+
+class InvalidColumnDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42611 - invalid column definition."""
+
+ pass
+
+
+class InvalidCursorDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P11 - invalid cursor definition."""
+
+ pass
+
+
+class InvalidDatabaseDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P12 - invalid database definition."""
+
+ pass
+
+
+class InvalidFunctionDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P13 - invalid function definition."""
+
+ pass
+
+
+class InvalidPreparedStatementDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P14 - invalid prepared statement definition."""
+
+ pass
+
+
+class InvalidSchemaDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P15 - invalid schema definition."""
+
+ pass
+
+
+class InvalidTableDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P16 - invalid table definition."""
+
+ pass
+
+
+class InvalidObjectDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P17 - invalid object definition."""
+
+ pass
+
+
+class WithCheckOptionViolation(LibpqError):
+ """SQLSTATE 44000 - with check option violation."""
+
+ pass
+
+
+class InsufficientResources(LibpqError):
+ """SQLSTATE 53000 - insufficient resources."""
+
+ pass
+
+
+class DiskFull(InsufficientResources):
+ """SQLSTATE 53100 - disk full."""
+
+ pass
+
+
+class OutOfMemory(InsufficientResources):
+ """SQLSTATE 53200 - out of memory."""
+
+ pass
+
+
+class TooManyConnections(InsufficientResources):
+ """SQLSTATE 53300 - too many connections."""
+
+ pass
+
+
+class ConfigurationLimitExceeded(InsufficientResources):
+ """SQLSTATE 53400 - configuration limit exceeded."""
+
+ pass
+
+
+class ProgramLimitExceeded(LibpqError):
+ """SQLSTATE 54000 - program limit exceeded."""
+
+ pass
+
+
+class StatementTooComplex(ProgramLimitExceeded):
+ """SQLSTATE 54001 - statement too complex."""
+
+ pass
+
+
+class TooManyColumns(ProgramLimitExceeded):
+ """SQLSTATE 54011 - too many columns."""
+
+ pass
+
+
+class TooManyArguments(ProgramLimitExceeded):
+ """SQLSTATE 54023 - too many arguments."""
+
+ pass
+
+
+class ObjectNotInPrerequisiteState(LibpqError):
+ """SQLSTATE 55000 - object not in prerequisite state."""
+
+ pass
+
+
+class ObjectInUse(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55006 - object in use."""
+
+ pass
+
+
+class CantChangeRuntimeParam(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P02 - cant change runtime param."""
+
+ pass
+
+
+class LockNotAvailable(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P03 - lock not available."""
+
+ pass
+
+
+class UnsafeNewEnumValueUsage(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P04 - unsafe new enum value usage."""
+
+ pass
+
+
+class OperatorIntervention(LibpqError):
+ """SQLSTATE 57000 - operator intervention."""
+
+ pass
+
+
+class QueryCanceled(OperatorIntervention):
+ """SQLSTATE 57014 - query canceled."""
+
+ pass
+
+
+class AdminShutdown(OperatorIntervention):
+ """SQLSTATE 57P01 - admin shutdown."""
+
+ pass
+
+
+class CrashShutdown(OperatorIntervention):
+ """SQLSTATE 57P02 - crash shutdown."""
+
+ pass
+
+
+class CannotConnectNow(OperatorIntervention):
+ """SQLSTATE 57P03 - cannot connect now."""
+
+ pass
+
+
+class DatabaseDropped(OperatorIntervention):
+ """SQLSTATE 57P04 - database dropped."""
+
+ pass
+
+
+class IdleSessionTimeout(OperatorIntervention):
+ """SQLSTATE 57P05 - idle session timeout."""
+
+ pass
+
+
+class SystemError(LibpqError):
+ """SQLSTATE 58000 - system error."""
+
+ pass
+
+
+class IoError(SystemError):
+ """SQLSTATE 58030 - io error."""
+
+ pass
+
+
+class UndefinedFile(SystemError):
+ """SQLSTATE 58P01 - undefined file."""
+
+ pass
+
+
+class DuplicateFile(SystemError):
+ """SQLSTATE 58P02 - duplicate file."""
+
+ pass
+
+
+class FileNameTooLong(SystemError):
+ """SQLSTATE 58P03 - file name too long."""
+
+ pass
+
+
+class ConfigFileError(LibpqError):
+ """SQLSTATE F0000 - config file error."""
+
+ pass
+
+
+class LockFileExists(ConfigFileError):
+ """SQLSTATE F0001 - lock file exists."""
+
+ pass
+
+
+class FDWError(LibpqError):
+ """SQLSTATE HV000 - fdw error."""
+
+ pass
+
+
+class FDWColumnNameNotFound(FDWError):
+ """SQLSTATE HV005 - fdw column name not found."""
+
+ pass
+
+
+class FDWDynamicParameterValueNeeded(FDWError):
+ """SQLSTATE HV002 - fdw dynamic parameter value needed."""
+
+ pass
+
+
+class FDWFunctionSequenceError(FDWError):
+ """SQLSTATE HV010 - fdw function sequence error."""
+
+ pass
+
+
+class FDWInconsistentDescriptorInformation(FDWError):
+ """SQLSTATE HV021 - fdw inconsistent descriptor information."""
+
+ pass
+
+
+class FDWInvalidAttributeValue(FDWError):
+ """SQLSTATE HV024 - fdw invalid attribute value."""
+
+ pass
+
+
+class FDWInvalidColumnName(FDWError):
+ """SQLSTATE HV007 - fdw invalid column name."""
+
+ pass
+
+
+class FDWInvalidColumnNumber(FDWError):
+ """SQLSTATE HV008 - fdw invalid column number."""
+
+ pass
+
+
+class FDWInvalidDataType(FDWError):
+ """SQLSTATE HV004 - fdw invalid data type."""
+
+ pass
+
+
+class FDWInvalidDataTypeDescriptors(FDWError):
+ """SQLSTATE HV006 - fdw invalid data type descriptors."""
+
+ pass
+
+
+class FDWInvalidDescriptorFieldIdentifier(FDWError):
+ """SQLSTATE HV091 - fdw invalid descriptor field identifier."""
+
+ pass
+
+
+class FDWInvalidHandle(FDWError):
+ """SQLSTATE HV00B - fdw invalid handle."""
+
+ pass
+
+
+class FDWInvalidOptionIndex(FDWError):
+ """SQLSTATE HV00C - fdw invalid option index."""
+
+ pass
+
+
+class FDWInvalidOptionName(FDWError):
+ """SQLSTATE HV00D - fdw invalid option name."""
+
+ pass
+
+
+class FDWInvalidStringLengthOrBufferLength(FDWError):
+ """SQLSTATE HV090 - fdw invalid string length or buffer length."""
+
+ pass
+
+
+class FDWInvalidStringFormat(FDWError):
+ """SQLSTATE HV00A - fdw invalid string format."""
+
+ pass
+
+
+class FDWInvalidUseOfNullPointer(FDWError):
+ """SQLSTATE HV009 - fdw invalid use of null pointer."""
+
+ pass
+
+
+class FDWTooManyHandles(FDWError):
+ """SQLSTATE HV014 - fdw too many handles."""
+
+ pass
+
+
+class FDWOutOfMemory(FDWError):
+ """SQLSTATE HV001 - fdw out of memory."""
+
+ pass
+
+
+class FDWNoSchemas(FDWError):
+ """SQLSTATE HV00P - fdw no schemas."""
+
+ pass
+
+
+class FDWOptionNameNotFound(FDWError):
+ """SQLSTATE HV00J - fdw option name not found."""
+
+ pass
+
+
+class FDWReplyHandle(FDWError):
+ """SQLSTATE HV00K - fdw reply handle."""
+
+ pass
+
+
+class FDWSchemaNotFound(FDWError):
+ """SQLSTATE HV00Q - fdw schema not found."""
+
+ pass
+
+
+class FDWTableNotFound(FDWError):
+ """SQLSTATE HV00R - fdw table not found."""
+
+ pass
+
+
+class FDWUnableToCreateExecution(FDWError):
+ """SQLSTATE HV00L - fdw unable to create execution."""
+
+ pass
+
+
+class FDWUnableToCreateReply(FDWError):
+ """SQLSTATE HV00M - fdw unable to create reply."""
+
+ pass
+
+
+class FDWUnableToEstablishConnection(FDWError):
+ """SQLSTATE HV00N - fdw unable to establish connection."""
+
+ pass
+
+
+class PlpgsqlError(LibpqError):
+ """SQLSTATE P0000 - plpgsql error."""
+
+ pass
+
+
+class RaiseException(PlpgsqlError):
+ """SQLSTATE P0001 - raise exception."""
+
+ pass
+
+
+class NoDataFound(PlpgsqlError):
+ """SQLSTATE P0002 - no data found."""
+
+ pass
+
+
+class TooManyRows(PlpgsqlError):
+ """SQLSTATE P0003 - too many rows."""
+
+ pass
+
+
+class AssertFailure(PlpgsqlError):
+ """SQLSTATE P0004 - assert failure."""
+
+ pass
+
+
+class InternalError(LibpqError):
+ """SQLSTATE XX000 - internal error."""
+
+ pass
+
+
+class DataCorrupted(InternalError):
+ """SQLSTATE XX001 - data corrupted."""
+
+ pass
+
+
+class IndexCorrupted(InternalError):
+ """SQLSTATE XX002 - index corrupted."""
+
+ pass
+
+
+SQLSTATE_TO_EXCEPTION: Dict[str, type] = {
+ "00000": SuccessfulCompletion,
+ "01000": Warning,
+ "0100C": DynamicResultSetsReturnedWarning,
+ "01008": ImplicitZeroBitPaddingWarning,
+ "01003": NullValueEliminatedInSetFunctionWarning,
+ "01007": PrivilegeNotGrantedWarning,
+ "01006": PrivilegeNotRevokedWarning,
+ "01004": StringDataRightTruncationWarning,
+ "01P01": DeprecatedFeatureWarning,
+ "02000": NoData,
+ "02001": NoAdditionalDynamicResultSetsReturned,
+ "03000": SQLStatementNotYetComplete,
+ "08000": ConnectionException,
+ "08003": ConnectionDoesNotExist,
+ "08006": ConnectionFailure,
+ "08001": SQLClientUnableToEstablishSQLConnection,
+ "08004": SQLServerRejectedEstablishmentOfSQLConnection,
+ "08007": TransactionResolutionUnknown,
+ "08P01": ProtocolViolation,
+ "09000": TriggeredActionException,
+ "0A000": FeatureNotSupported,
+ "0B000": InvalidTransactionInitiation,
+ "0F000": LocatorException,
+ "0F001": InvalidLocatorSpecification,
+ "0L000": InvalidGrantor,
+ "0LP01": InvalidGrantOperation,
+ "0P000": InvalidRoleSpecification,
+ "0Z000": DiagnosticsException,
+ "0Z002": StackedDiagnosticsAccessedWithoutActiveHandler,
+ "10608": InvalidArgumentForXquery,
+ "20000": CaseNotFound,
+ "21000": CardinalityViolation,
+ "22000": DataException,
+ "2202E": ArraySubscriptError,
+ "22021": CharacterNotInRepertoire,
+ "22008": DatetimeFieldOverflow,
+ "22012": DivisionByZero,
+ "22005": ErrorInAssignment,
+ "2200B": EscapeCharacterConflict,
+ "22022": IndicatorOverflow,
+ "22015": IntervalFieldOverflow,
+ "2201E": InvalidArgumentForLogarithm,
+ "22014": InvalidArgumentForNtileFunction,
+ "22016": InvalidArgumentForNthValueFunction,
+ "2201F": InvalidArgumentForPowerFunction,
+ "2201G": InvalidArgumentForWidthBucketFunction,
+ "22018": InvalidCharacterValueForCast,
+ "22007": InvalidDatetimeFormat,
+ "22019": InvalidEscapeCharacter,
+ "2200D": InvalidEscapeOctet,
+ "22025": InvalidEscapeSequence,
+ "22P06": NonstandardUseOfEscapeCharacter,
+ "22010": InvalidIndicatorParameterValue,
+ "22023": InvalidParameterValue,
+ "22013": InvalidPrecedingOrFollowingSize,
+ "2201B": InvalidRegularExpression,
+ "2201W": InvalidRowCountInLimitClause,
+ "2201X": InvalidRowCountInResultOffsetClause,
+ "2202H": InvalidTablesampleArgument,
+ "2202G": InvalidTablesampleRepeat,
+ "22009": InvalidTimeZoneDisplacementValue,
+ "2200C": InvalidUseOfEscapeCharacter,
+ "2200G": MostSpecificTypeMismatch,
+ "22004": NullValueNotAllowed,
+ "22002": NullValueNoIndicatorParameter,
+ "22003": NumericValueOutOfRange,
+ "2200H": SequenceGeneratorLimitExceeded,
+ "22026": StringDataLengthMismatch,
+ "22001": StringDataRightTruncation,
+ "22011": SubstringError,
+ "22027": TrimError,
+ "22024": UnterminatedCString,
+ "2200F": ZeroLengthCharacterString,
+ "22P01": FloatingPointException,
+ "22P02": InvalidTextRepresentation,
+ "22P03": InvalidBinaryRepresentation,
+ "22P04": BadCopyFileFormat,
+ "22P05": UntranslatableCharacter,
+ "2200L": NotAnXmlDocument,
+ "2200M": InvalidXmlDocument,
+ "2200N": InvalidXmlContent,
+ "2200S": InvalidXmlComment,
+ "2200T": InvalidXmlProcessingInstruction,
+ "22030": DuplicateJsonObjectKeyValue,
+ "22031": InvalidArgumentForSQLJsonDatetimeFunction,
+ "22032": InvalidJsonText,
+ "22033": InvalidSQLJsonSubscript,
+ "22034": MoreThanOneSQLJsonItem,
+ "22035": NoSQLJsonItem,
+ "22036": NonNumericSQLJsonItem,
+ "22037": NonUniqueKeysInAJsonObject,
+ "22038": SingletonSQLJsonItemRequired,
+ "22039": SQLJsonArrayNotFound,
+ "2203A": SQLJsonMemberNotFound,
+ "2203B": SQLJsonNumberNotFound,
+ "2203C": SQLJsonObjectNotFound,
+ "2203D": TooManyJsonArrayElements,
+ "2203E": TooManyJsonObjectMembers,
+ "2203F": SQLJsonScalarRequired,
+ "2203G": SQLJsonItemCannotBeCastToTargetType,
+ "23000": IntegrityConstraintViolation,
+ "23001": RestrictViolation,
+ "23502": NotNullViolation,
+ "23503": ForeignKeyViolation,
+ "23505": UniqueViolation,
+ "23514": CheckViolation,
+ "23P01": ExclusionViolation,
+ "24000": InvalidCursorState,
+ "25000": InvalidTransactionState,
+ "25001": ActiveSQLTransaction,
+ "25002": BranchTransactionAlreadyActive,
+ "25008": HeldCursorRequiresSameIsolationLevel,
+ "25003": InappropriateAccessModeForBranchTransaction,
+ "25004": InappropriateIsolationLevelForBranchTransaction,
+ "25005": NoActiveSQLTransactionForBranchTransaction,
+ "25006": ReadOnlySQLTransaction,
+ "25007": SchemaAndDataStatementMixingNotSupported,
+ "25P01": NoActiveSQLTransaction,
+ "25P02": InFailedSQLTransaction,
+ "25P03": IdleInTransactionSessionTimeout,
+ "25P04": TransactionTimeout,
+ "26000": InvalidSQLStatementName,
+ "27000": TriggeredDataChangeViolation,
+ "28000": InvalidAuthorizationSpecification,
+ "28P01": InvalidPassword,
+ "2B000": DependentPrivilegeDescriptorsStillExist,
+ "2BP01": DependentObjectsStillExist,
+ "2D000": InvalidTransactionTermination,
+ "2F000": SQLRoutineException,
+ "2F005": FunctionExecutedNoReturnStatement,
+ "2F002": SREModifyingSQLDataNotPermitted,
+ "2F003": SREProhibitedSQLStatementAttempted,
+ "2F004": SREReadingSQLDataNotPermitted,
+ "34000": InvalidCursorName,
+ "38000": ExternalRoutineException,
+ "38001": ContainingSQLNotPermitted,
+ "38002": EREModifyingSQLDataNotPermitted,
+ "38003": EREProhibitedSQLStatementAttempted,
+ "38004": EREReadingSQLDataNotPermitted,
+ "39000": ExternalRoutineInvocationException,
+ "39001": InvalidSqlstateReturned,
+ "39004": ERIENullValueNotAllowed,
+ "39P01": TriggerProtocolViolated,
+ "39P02": SrfProtocolViolated,
+ "39P03": EventTriggerProtocolViolated,
+ "3B000": SavepointException,
+ "3B001": InvalidSavepointSpecification,
+ "3D000": InvalidCatalogName,
+ "3F000": InvalidSchemaName,
+ "40000": TransactionRollback,
+ "40002": TransactionIntegrityConstraintViolation,
+ "40001": SerializationFailure,
+ "40003": StatementCompletionUnknown,
+ "40P01": DeadlockDetected,
+ "42000": SyntaxErrorOrAccessRuleViolation,
+ "42601": SyntaxError,
+ "42501": InsufficientPrivilege,
+ "42846": CannotCoerce,
+ "42803": GroupingError,
+ "42P20": WindowingError,
+ "42P19": InvalidRecursion,
+ "42830": InvalidForeignKey,
+ "42602": InvalidName,
+ "42622": NameTooLong,
+ "42939": ReservedName,
+ "42804": DatatypeMismatch,
+ "42P18": IndeterminateDatatype,
+ "42P21": CollationMismatch,
+ "42P22": IndeterminateCollation,
+ "42809": WrongObjectType,
+ "428C9": GeneratedAlways,
+ "42703": UndefinedColumn,
+ "42883": UndefinedFunction,
+ "42P01": UndefinedTable,
+ "42P02": UndefinedParameter,
+ "42704": UndefinedObject,
+ "42701": DuplicateColumn,
+ "42P03": DuplicateCursor,
+ "42P04": DuplicateDatabase,
+ "42723": DuplicateFunction,
+ "42P05": DuplicatePreparedStatement,
+ "42P06": DuplicateSchema,
+ "42P07": DuplicateTable,
+ "42712": DuplicateAlias,
+ "42710": DuplicateObject,
+ "42702": AmbiguousColumn,
+ "42725": AmbiguousFunction,
+ "42P08": AmbiguousParameter,
+ "42P09": AmbiguousAlias,
+ "42P10": InvalidColumnReference,
+ "42611": InvalidColumnDefinition,
+ "42P11": InvalidCursorDefinition,
+ "42P12": InvalidDatabaseDefinition,
+ "42P13": InvalidFunctionDefinition,
+ "42P14": InvalidPreparedStatementDefinition,
+ "42P15": InvalidSchemaDefinition,
+ "42P16": InvalidTableDefinition,
+ "42P17": InvalidObjectDefinition,
+ "44000": WithCheckOptionViolation,
+ "53000": InsufficientResources,
+ "53100": DiskFull,
+ "53200": OutOfMemory,
+ "53300": TooManyConnections,
+ "53400": ConfigurationLimitExceeded,
+ "54000": ProgramLimitExceeded,
+ "54001": StatementTooComplex,
+ "54011": TooManyColumns,
+ "54023": TooManyArguments,
+ "55000": ObjectNotInPrerequisiteState,
+ "55006": ObjectInUse,
+ "55P02": CantChangeRuntimeParam,
+ "55P03": LockNotAvailable,
+ "55P04": UnsafeNewEnumValueUsage,
+ "57000": OperatorIntervention,
+ "57014": QueryCanceled,
+ "57P01": AdminShutdown,
+ "57P02": CrashShutdown,
+ "57P03": CannotConnectNow,
+ "57P04": DatabaseDropped,
+ "57P05": IdleSessionTimeout,
+ "58000": SystemError,
+ "58030": IoError,
+ "58P01": UndefinedFile,
+ "58P02": DuplicateFile,
+ "58P03": FileNameTooLong,
+ "F0000": ConfigFileError,
+ "F0001": LockFileExists,
+ "HV000": FDWError,
+ "HV005": FDWColumnNameNotFound,
+ "HV002": FDWDynamicParameterValueNeeded,
+ "HV010": FDWFunctionSequenceError,
+ "HV021": FDWInconsistentDescriptorInformation,
+ "HV024": FDWInvalidAttributeValue,
+ "HV007": FDWInvalidColumnName,
+ "HV008": FDWInvalidColumnNumber,
+ "HV004": FDWInvalidDataType,
+ "HV006": FDWInvalidDataTypeDescriptors,
+ "HV091": FDWInvalidDescriptorFieldIdentifier,
+ "HV00B": FDWInvalidHandle,
+ "HV00C": FDWInvalidOptionIndex,
+ "HV00D": FDWInvalidOptionName,
+ "HV090": FDWInvalidStringLengthOrBufferLength,
+ "HV00A": FDWInvalidStringFormat,
+ "HV009": FDWInvalidUseOfNullPointer,
+ "HV014": FDWTooManyHandles,
+ "HV001": FDWOutOfMemory,
+ "HV00P": FDWNoSchemas,
+ "HV00J": FDWOptionNameNotFound,
+ "HV00K": FDWReplyHandle,
+ "HV00Q": FDWSchemaNotFound,
+ "HV00R": FDWTableNotFound,
+ "HV00L": FDWUnableToCreateExecution,
+ "HV00M": FDWUnableToCreateReply,
+ "HV00N": FDWUnableToEstablishConnection,
+ "P0000": PlpgsqlError,
+ "P0001": RaiseException,
+ "P0002": NoDataFound,
+ "P0003": TooManyRows,
+ "P0004": AssertFailure,
+ "XX000": InternalError,
+ "XX001": DataCorrupted,
+ "XX002": IndexCorrupted,
+}
+
+
+__all__ = [
+ "InvalidCursorName",
+ "UndefinedParameter",
+ "UndefinedColumn",
+ "NotAnXmlDocument",
+ "FDWOutOfMemory",
+ "InvalidRoleSpecification",
+ "InvalidArgumentForNthValueFunction",
+ "SQLJsonObjectNotFound",
+ "FDWSchemaNotFound",
+ "InvalidParameterValue",
+ "InvalidTableDefinition",
+ "AssertFailure",
+ "FDWInvalidOptionName",
+ "InvalidEscapeOctet",
+ "ReadOnlySQLTransaction",
+ "ExternalRoutineInvocationException",
+ "CrashShutdown",
+ "FDWInvalidOptionIndex",
+ "NotNullViolation",
+ "ConfigFileError",
+ "InvalidSQLJsonSubscript",
+ "InvalidForeignKey",
+ "InsufficientResources",
+ "ObjectNotInPrerequisiteState",
+ "InvalidRowCountInLimitClause",
+ "IntervalFieldOverflow",
+ "CollationMismatch",
+ "InvalidArgumentForNtileFunction",
+ "InvalidCharacterValueForCast",
+ "NonUniqueKeysInAJsonObject",
+ "DependentPrivilegeDescriptorsStillExist",
+ "InFailedSQLTransaction",
+ "GroupingError",
+ "TransactionTimeout",
+ "CaseNotFound",
+ "ConnectionException",
+ "DuplicateJsonObjectKeyValue",
+ "InvalidSchemaDefinition",
+ "FDWUnableToCreateReply",
+ "UndefinedTable",
+ "SequenceGeneratorLimitExceeded",
+ "InvalidJsonText",
+ "IdleSessionTimeout",
+ "NullValueNotAllowed",
+ "BranchTransactionAlreadyActive",
+ "InvalidGrantOperation",
+ "NullValueNoIndicatorParameter",
+ "ProtocolViolation",
+ "FDWInvalidDataTypeDescriptors",
+ "TriggeredDataChangeViolation",
+ "ExternalRoutineException",
+ "InvalidSqlstateReturned",
+ "PlpgsqlError",
+ "InvalidXmlContent",
+ "TriggeredActionException",
+ "SQLClientUnableToEstablishSQLConnection",
+ "FDWTableNotFound",
+ "NumericValueOutOfRange",
+ "RestrictViolation",
+ "AmbiguousParameter",
+ "StatementTooComplex",
+ "UnsafeNewEnumValueUsage",
+ "NonNumericSQLJsonItem",
+ "InvalidIndicatorParameterValue",
+ "ExclusionViolation",
+ "OperatorIntervention",
+ "QueryCanceled",
+ "Warning",
+ "InvalidArgumentForSQLJsonDatetimeFunction",
+ "ForeignKeyViolation",
+ "StringDataLengthMismatch",
+ "SQLRoutineException",
+ "TooManyConnections",
+ "TooManyJsonObjectMembers",
+ "NoData",
+ "UntranslatableCharacter",
+ "FDWUnableToEstablishConnection",
+ "LockFileExists",
+ "SREReadingSQLDataNotPermitted",
+ "IndeterminateDatatype",
+ "CheckViolation",
+ "InvalidDatabaseDefinition",
+ "NoActiveSQLTransactionForBranchTransaction",
+ "SQLServerRejectedEstablishmentOfSQLConnection",
+ "DuplicateFile",
+ "FDWInvalidColumnNumber",
+ "TransactionRollback",
+ "MoreThanOneSQLJsonItem",
+ "WithCheckOptionViolation",
+ "FDWNoSchemas",
+ "GeneratedAlways",
+ "CannotConnectNow",
+ "CardinalityViolation",
+ "InvalidAuthorizationSpecification",
+ "SQLJsonNumberNotFound",
+ "SQLJsonMemberNotFound",
+ "InvalidUseOfEscapeCharacter",
+ "UnterminatedCString",
+ "TrimError",
+ "SrfProtocolViolated",
+ "DiskFull",
+ "TooManyColumns",
+ "InvalidObjectDefinition",
+ "InvalidArgumentForLogarithm",
+ "TooManyJsonArrayElements",
+ "OutOfMemory",
+ "EREProhibitedSQLStatementAttempted",
+ "FDWInvalidStringFormat",
+ "StackedDiagnosticsAccessedWithoutActiveHandler",
+ "SchemaAndDataStatementMixingNotSupported",
+ "InternalError",
+ "InvalidEscapeCharacter",
+ "FDWError",
+ "ImplicitZeroBitPaddingWarning",
+ "DivisionByZero",
+ "InvalidTablesampleArgument",
+ "DeadlockDetected",
+ "CantChangeRuntimeParam",
+ "UndefinedObject",
+ "UniqueViolation",
+ "InvalidCursorDefinition",
+ "ConnectionFailure",
+ "UndefinedFunction",
+ "FDWFunctionSequenceError",
+ "ErrorInAssignment",
+ "SuccessfulCompletion",
+ "StringDataRightTruncation",
+ "FDWTooManyHandles",
+ "FDWInvalidDataType",
+ "ActiveSQLTransaction",
+ "InvalidTextRepresentation",
+ "InvalidSQLStatementName",
+ "PrivilegeNotGrantedWarning",
+ "SREModifyingSQLDataNotPermitted",
+ "IndeterminateCollation",
+ "SystemError",
+ "NullValueEliminatedInSetFunctionWarning",
+ "DependentObjectsStillExist",
+ "InvalidSchemaName",
+ "DuplicateColumn",
+ "FunctionExecutedNoReturnStatement",
+ "InvalidColumnDefinition",
+ "DynamicResultSetsReturnedWarning",
+ "IdleInTransactionSessionTimeout",
+ "StatementCompletionUnknown",
+ "CannotCoerce",
+ "InvalidTransactionState",
+ "DuplicateTable",
+ "BadCopyFileFormat",
+ "ZeroLengthCharacterString",
+ "SyntaxErrorOrAccessRuleViolation",
+ "SingletonSQLJsonItemRequired",
+ "IndexCorrupted",
+ "FDWInvalidColumnName",
+ "DataCorrupted",
+ "ERIENullValueNotAllowed",
+ "ArraySubscriptError",
+ "FDWReplyHandle",
+ "DiagnosticsException",
+ "InvalidTablesampleRepeat",
+ "SQLJsonItemCannotBeCastToTargetType",
+ "FDWInvalidHandle",
+ "InvalidPassword",
+ "InvalidEscapeSequence",
+ "EscapeCharacterConflict",
+ "InvalidSavepointSpecification",
+ "FDWInvalidAttributeValue",
+ "ContainingSQLNotPermitted",
+ "LocatorException",
+ "DatatypeMismatch",
+ "InvalidCursorState",
+ "InvalidName",
+ "IndicatorOverflow",
+ "ReservedName",
+ "DatetimeFieldOverflow",
+ "FDWInconsistentDescriptorInformation",
+ "FloatingPointException",
+ "AmbiguousAlias",
+ "InvalidRecursion",
+ "WrongObjectType",
+ "UndefinedFile",
+ "LockNotAvailable",
+ "InvalidRowCountInResultOffsetClause",
+ "ObjectInUse",
+ "DeprecatedFeatureWarning",
+ "FDWDynamicParameterValueNeeded",
+ "DuplicateFunction",
+ "InvalidXmlDocument",
+ "StringDataRightTruncationWarning",
+ "DuplicatePreparedStatement",
+ "InvalidGrantor",
+ "EventTriggerProtocolViolated",
+ "FDWInvalidUseOfNullPointer",
+ "FDWUnableToCreateExecution",
+ "ConnectionDoesNotExist",
+ "InvalidCatalogName",
+ "InvalidArgumentForXquery",
+ "FDWColumnNameNotFound",
+ "TransactionIntegrityConstraintViolation",
+ "InvalidPreparedStatementDefinition",
+ "FDWInvalidDescriptorFieldIdentifier",
+ "FDWOptionNameNotFound",
+ "InvalidArgumentForPowerFunction",
+ "FDWInvalidStringLengthOrBufferLength",
+ "SREProhibitedSQLStatementAttempted",
+ "NoDataFound",
+ "DuplicateDatabase",
+ "FeatureNotSupported",
+ "IntegrityConstraintViolation",
+ "AmbiguousColumn",
+ "PrivilegeNotRevokedWarning",
+ "FileNameTooLong",
+ "InvalidArgumentForWidthBucketFunction",
+ "HeldCursorRequiresSameIsolationLevel",
+ "NoSQLJsonItem",
+ "IoError",
+ "SavepointException",
+ "NoActiveSQLTransaction",
+ "InvalidFunctionDefinition",
+ "AdminShutdown",
+ "DatabaseDropped",
+ "InvalidRegularExpression",
+ "WindowingError",
+ "InvalidColumnReference",
+ "InvalidBinaryRepresentation",
+ "SQLJsonScalarRequired",
+ "ConfigurationLimitExceeded",
+ "SyntaxError",
+ "SerializationFailure",
+ "ProgramLimitExceeded",
+ "DuplicateSchema",
+ "SQLStatementNotYetComplete",
+ "LibpqError",
+ "DataException",
+ "SubstringError",
+ "InvalidLocatorSpecification",
+ "InappropriateAccessModeForBranchTransaction",
+ "EREModifyingSQLDataNotPermitted",
+ "InsufficientPrivilege",
+ "NoAdditionalDynamicResultSetsReturned",
+ "SQLJsonArrayNotFound",
+ "NameTooLong",
+ "InvalidTimeZoneDisplacementValue",
+ "InappropriateIsolationLevelForBranchTransaction",
+ "RaiseException",
+ "EREReadingSQLDataNotPermitted",
+ "TriggerProtocolViolated",
+ "NonstandardUseOfEscapeCharacter",
+ "InvalidTransactionInitiation",
+ "DuplicateAlias",
+ "TransactionResolutionUnknown",
+ "TooManyRows",
+ "InvalidXmlComment",
+ "MostSpecificTypeMismatch",
+ "DuplicateObject",
+ "DuplicateCursor",
+ "AmbiguousFunction",
+ "TooManyArguments",
+ "InvalidXmlProcessingInstruction",
+ "InvalidTransactionTermination",
+ "InvalidDatetimeFormat",
+ "InvalidPrecedingOrFollowingSize",
+ "CharacterNotInRepertoire",
+ "SQLSTATE_TO_EXCEPTION",
+]
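
As an aside for reviewers: because the generated classes mirror the SQLSTATE
class hierarchy, a test can catch a whole family of errors without
enumerating every subclass. A minimal sketch, assuming conn.sql() raises the
LibpqError subclasses defined here on server errors (the conn fixture is
introduced later in this patch, and the exact query API shown is an
assumption):

    import pytest
    from libpq.errors import IntegrityConstraintViolation, UniqueViolation

    def test_duplicate_insert(conn):
        conn.sql("CREATE TABLE t (id int PRIMARY KEY)")
        conn.sql("INSERT INTO t VALUES (1)")

        # UniqueViolation (23505) subclasses IntegrityConstraintViolation
        # (23000), so catching the parent also catches the child.
        with pytest.raises(IntegrityConstraintViolation) as excinfo:
            conn.sql("INSERT INTO t VALUES (1)")
        assert isinstance(excinfo.value, UniqueViolation)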
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..764a96c2478
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+PostgreSQL error types mapped from SQLSTATE codes.
+
+This module provides LibpqError and its subclasses for handling PostgreSQL
+errors based on SQLSTATE codes. The exception classes in _generated_errors.py
+are auto-generated from src/backend/utils/errcodes.txt.
+
+To regenerate: src/tools/generate_pytest_libpq_errors.py
+"""
+
+from typing import Optional
+
+from ._error_base import LibpqError, LibpqWarning
+from ._generated_errors import (
+ SQLSTATE_TO_EXCEPTION,
+)
+from ._generated_errors import * # noqa: F403
+
+
+def get_exception_class(sqlstate: Optional[str]) -> type:
+ """Get the appropriate exception class for a SQLSTATE code."""
+ if sqlstate in SQLSTATE_TO_EXCEPTION:
+ return SQLSTATE_TO_EXCEPTION[sqlstate]
+ return LibpqError
+
+
+def make_error(message: str, *, sqlstate: Optional[str] = None, **kwargs) -> LibpqError:
+ """Create an appropriate LibpqError subclass based on the SQLSTATE code."""
+ exc_class = get_exception_class(sqlstate)
+ return exc_class(message, sqlstate=sqlstate, **kwargs)
+
+
+__all__ = [
+ "LibpqError",
+ "LibpqWarning",
+ "make_error",
+]
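
make_error() is the piece that spares tests from hard-coding the mapping
themselves: given a SQLSTATE it returns the most specific generated
subclass, and unknown codes degrade to the LibpqError base. A quick sketch
of the intended behavior, assuming LibpqError's constructor accepts the
sqlstate keyword that make_error() passes through:

    from libpq import errors

    def test_make_error_mapping():
        e = errors.make_error("deadlock detected", sqlstate="40P01")
        assert type(e).__name__ == "DeadlockDetected"
        assert isinstance(e, errors.LibpqError)

        # Codes missing from SQLSTATE_TO_EXCEPTION fall back to the base.
        e = errors.make_error("mystery error", sqlstate="ZZ999")
        assert type(e) is errors.LibpqError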
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,7 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
- 'pyt/test_something.py',
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extras(*keys: str):
+ """
+ A convenience annotation which skips tests unless all of the required keys
+ are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys are not present
+ in PG_TEST_EXTRA. Use this inside fixtures where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
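
To make the gating concrete: PG_TEST_EXTRA is split on whitespace and every
requested key must be present, so a module gated on two extras is skipped
unless both appear. A hypothetical module-level usage:

    import pypg

    # Skipped unless PG_TEST_EXTRA contains both keys, e.g.
    # PG_TEST_EXTRA="ssl ldap".
    pytestmark = pypg.require_test_extras("ssl", "ldap")

    def test_needs_ssl_and_ldap(conn):
        ...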
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+ This fixture is per-module, so it is generally only useful for configuring
+ timeouts of operations that happen in the setup phase of other
+ module-scoped fixtures. If you used it in a test, each subsequent test in
+ the module would get a reduced timeout.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management,
+ configuration, and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context. This is useful when certain settings need to
+ be overridden at the module level through autouse fixtures. An example of
+ this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
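
To show how these fixtures compose, here is a sketch of a test file that
uses conn for the common case and create_pg when a test needs an extra
cluster. The sql() return convention follows the docstrings above, and
calling connect() on a create_pg-created server is an assumption based on
the conn fixture; treat the details as illustrative:

    def test_simple_query(conn):
        # conn is pre-connected to the shared per-test server context.
        assert conn.sql("SELECT 1") == 1

    def test_second_cluster(pg, create_pg):
        # An independent second server, stopped and cleaned up after the
        # test by the create_pg fixture.
        other = create_pg()
        c1 = pg.connect()
        c2 = other.connect()
        assert c1.sql("SELECT version()") == c2.sql("SELECT version()")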
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
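+
+ For example (a sketch, assuming an `s` target from reloading()):
+
+ s.conf.set(log_min_messages="debug1", work_mem="64MB")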
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one.
+ Currently only allowed if hostaddr is also provided.
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # Record the host to use for connections: either the socket
+ # directory or the TCP address.
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+ # Between closing of the socket, temp_sock, and server start, we're racing
+ # against anything that wants to open up ephemeral ports, so try not to
+ # put any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
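+
+ Roughly how the autouse fixture drives it:
+
+ with server.start_new_test(remaining_timeout):
+ ... # one test runs against the server here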
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(self._bindir / "psql", "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
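+
+ Hypothetical usage:
+
+ with server.subcontext():
+ server.create_dbs("scratch") # dropped again when the context exits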
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, any number of matches is accepted.
+ If a number, exactly that many matches are required.
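+
+ Example (mirroring the load balance tests):
+
+ with node.log_contains("connection received", times=1):
+ node.connect()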
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by the pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
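+ # e.g. run("make -C src all") goes through the shell, while
+ # run("make", "-C", "src", "all") execs the argument list directly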
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6 which we still like to support
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
+ """capture runs the given command and returns its stdout, sans trailing newline."""
+ out = run(command, *args, stdout=stdout, encoding=encoding, **kwargs).stdout
+ # str.removesuffix() is Python 3.9+; strip the newline by hand to keep the
+ # Python 3.6 compatibility mentioned above.
+ return out[:-1] if out.endswith("\n") else out
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..ad109039668
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+import libpq
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises SyntaxError with correct SQLSTATE."""
+ with pytest.raises(libpq.errors.SyntaxError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields and can be caught as parent class."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(libpq.errors.UniqueViolation) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
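+
+ Sketch of typical usage (see test_connection_is_finished_on_error):
+
+ def serve(sock: socket.socket) -> None:
+ ... # speak the wire protocol to the connected client
+
+ local_server.background(serve)
+ connect(host=local_server.host, port=local_server.port)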
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using the create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
diff --git a/src/tools/generate_pytest_libpq_errors.py b/src/tools/generate_pytest_libpq_errors.py
new file mode 100755
index 00000000000..ba92891c17a
--- /dev/null
+++ b/src/tools/generate_pytest_libpq_errors.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Generate src/test/pytest/libpq/_generated_errors.py from errcodes.txt.
+"""
+
+import sys
+from pathlib import Path
+
+
+ACRONYMS = {"sql", "fdw"}
+WORD_MAP = {
+ "sqlclient": "SQLClient",
+ "sqlserver": "SQLServer",
+ "sqlconnection": "SQLConnection",
+}
+
+
+def snake_to_pascal(name: str) -> str:
+ """Convert snake_case to PascalCase, keeping acronyms uppercase."""
+ words = []
+ for word in name.split("_"):
+ if word in WORD_MAP:
+ words.append(WORD_MAP[word])
+ elif word in ACRONYMS:
+ words.append(word.upper())
+ else:
+ words.append(word.capitalize())
+ return "".join(words)
+
+
+def parse_errcodes(path: Path):
+ """Parse errcodes.txt and return list of (sqlstate, macro_name, spec_name) tuples."""
+ errors = []
+
+ with open(path) as f:
+ for line in f:
+ parts = line.split()
+ if len(parts) >= 4 and len(parts[0]) == 5:
+ sqlstate, _, macro_name, spec_name = parts[:4]
+ errors.append((sqlstate, macro_name, spec_name))
+
+ return errors
+
+
+def macro_to_class_name(macro_name: str) -> str:
+ """Convert ERRCODE_FOO_BAR to FooBar."""
+ name = macro_name.removeprefix("ERRCODE_")
+ # Move WARNING prefix to the end as a suffix
+ if name.startswith("WARNING_"):
+ name = name.removeprefix("WARNING_") + "_WARNING"
+ return snake_to_pascal(name.lower())
+
+
+def generate_errors(errcodes_path: Path):
+ """Generate the _generated_errors.py content."""
+ errors = parse_errcodes(errcodes_path)
+
+ # Find spec_names that appear more than once (collisions)
+ spec_name_counts: dict[str, int] = {}
+ for _, _, spec_name in errors:
+ spec_name_counts[spec_name] = spec_name_counts.get(spec_name, 0) + 1
+ colliding_spec_names = {
+ name for name, count in spec_name_counts.items() if count > 1
+ }
+
+ lines = [
+ "# Copyright (c) 2025, PostgreSQL Global Development Group",
+ "# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.",
+ "",
+ '"""',
+ "Generated PostgreSQL error classes mapped from SQLSTATE codes.",
+ '"""',
+ "",
+ "from typing import Dict",
+ "",
+ "from ._error_base import LibpqError, LibpqWarning",
+ "",
+ "",
+ ]
+
+ generated_classes = {"LibpqError"}
+ sqlstate_to_exception = {}
+
+ for sqlstate, macro_name, spec_name in errors:
+ # 000 errors define the parent class for all errors in this SQLSTATE class
+ if sqlstate.endswith("000"):
+ exc_name = snake_to_pascal(spec_name)
+ if exc_name == "Warning":
+ parent = "LibpqWarning"
+ else:
+ parent = "LibpqError"
+ else:
+ if spec_name in colliding_spec_names:
+ exc_name = macro_to_class_name(macro_name)
+ else:
+ exc_name = snake_to_pascal(spec_name)
+ # Use parent class if available, otherwise LibpqError
+ parent = sqlstate_to_exception.get(sqlstate[:2] + "000", "LibpqError")
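+ # (e.g. 23505 unique_violation becomes UniqueViolation and inherits
+ # from the 23000 class IntegrityConstraintViolation)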
+ # Warnings should end with "Warning"
+ if parent == "Warning" and not exc_name.endswith("Warning"):
+ exc_name += "Warning"
+
+ generated_classes.add(exc_name)
+ sqlstate_to_exception[sqlstate] = exc_name
+ lines.extend(
+ [
+ f"class {exc_name}({parent}):",
+ f' """SQLSTATE {sqlstate} - {spec_name.replace("_", " ")}."""',
+ "",
+ " pass",
+ "",
+ "",
+ ]
+ )
+
+ lines.append("SQLSTATE_TO_EXCEPTION: Dict[str, type] = {")
+ for sqlstate, exc_name in sqlstate_to_exception.items():
+ lines.append(f' "{sqlstate}": {exc_name},')
+ lines.extend(["}", "", ""])
+
+ # Sort the class names so the generated file is deterministic across runs.
+ all_exports = sorted(generated_classes) + ["SQLSTATE_TO_EXCEPTION"]
+ lines.append("__all__ = [")
+ for name in all_exports:
+ lines.append(f' "{name}",')
+ lines.append("]")
+
+ return "\n".join(lines) + "\n"
+
+
+if __name__ == "__main__":
+ script_dir = Path(__file__).resolve().parent
+ src_root = script_dir.parent.parent
+
+ errcodes_path = src_root / "src" / "backend" / "utils" / "errcodes.txt"
+ output_path = (
+ src_root / "src" / "test" / "pytest" / "libpq" / "_generated_errors.py"
+ )
+
+ if not errcodes_path.exists():
+ print(f"Error: {errcodes_path} not found", file=sys.stderr)
+ sys.exit(1)
+
+ output = generate_errors(errcodes_path)
+ output_path.write_text(output)
+ print(f"Generated {output_path}")
--
2.52.0
v7-0005-Convert-load-balance-tests-from-perl-to-python.patch
From 026d187c8fad1ce8f701d66bd41ffb30e3eff4c0 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v7 5/9] Convert load balance tests from perl to python
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index bf4baa92917..4c4bdb4b3a3 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c5ecd9c3a87..56790dd92a9 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes with different socket directories.
+
+ Each node has its own Unix socket directory for isolation.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+ Windows or Linux are required to run this test because these OSes allow
+ binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
+ don't. We need to bind to different IP addresses, so that we can use these
+ different IP addresses in the hosts file.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+ except (OSError, IOError):
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+ # Statistically, each node should receive at least one connection: the
+ # probability that a given node receives none is (2/3)^50 ≈ 1.57e-9.
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 1f970ff994b..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index 210ec1ff517..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
v7-0006-WIP-pytest-Add-some-SSL-client-tests.patch
From a5670415c0f6073ce9bf393eddf2a1b1fb3b47db Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v7 6/9] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 18 ++-
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 434 insertions(+), 6 deletions(-)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index a2c3febc30c..41d2a3c1867 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -229,6 +229,7 @@ task:
sysctl kern.corefile='/tmp/cores/%N.%P.core'
setup_additional_packages_script: |
pkg install -y \
+ py311-cryptography \
py311-packaging \
py311-pytest
@@ -323,6 +324,7 @@ task:
setup_additional_packages_script: |
pkgin -y install \
+ py312-cryptography \
py312-packaging \
py312-test
ln -s /usr/pkg/bin/pytest-3.12 /usr/pkg/bin/pytest
@@ -346,8 +348,9 @@ task:
setup_additional_packages_script: |
pkg_add -I \
- py3-test \
- py3-packaging
+ py3-cryptography \
+ py3-packaging \
+ py3-test
# Always core dump to ${CORE_DUMP_DIR}
set_core_dump_script: sysctl -w kern.nosuidcoredump=2
<<: *openbsd_task_template
@@ -508,8 +511,9 @@ task:
setup_additional_packages_script: |
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install \
- python3-pytest \
- python3-packaging
+ python3-cryptography \
+ python3-packaging \
+ python3-pytest
matrix:
# SPECIAL:
@@ -658,6 +662,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -678,6 +683,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
@@ -816,7 +822,7 @@ task:
# XXX Does Chocolatey really not have any Python package installers?
setup_additional_packages_script: |
REM choco install -y --no-progress ...
- pip3 install --user packaging pytest
+ pip3 install --user cryptography packaging pytest
setup_hosts_file_script: |
echo 127.0.0.1 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
@@ -879,7 +885,7 @@ task:
folder: ${CCACHE_DIR}
setup_additional_packages_script: |
- C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-pytest
+ C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-python-cryptography mingw-w64-ucrt-x86_64-python-pytest
mingw_info_script: |
%BASH% -c "where gcc"
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index aa062945fb9..287729ad9fb 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index 9e5bdbb6136..6ec274d8165 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
Attachment: v7-0007-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)
From 8514b52bc64ecca8355dffbc1873db418b1bf404 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v7 7/9] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
On Wed, Dec 17, 2025 at 8:10 AM Andres Freund <andres@anarazel.de> wrote:
I assume this intentionally doesn't pass CI:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/cf%2F6045
v2 intentionally doesn't pass CI to show what the failure modes are.
Jelte rightly pointed out that this masks known 32-bit failures -- I
hadn't yet found a way to get 32-bit and 64-bit Python to behave well
together on Debian. (Not a huge fan of how v4 is working around the
problem, though; if we can't load libpq into Python then why run
pytest?)
Before it gets too far away from me: note that I have not yet been
able to get up to speed with the combined refactoring+feature patch
that Jelte added in v3, and it's now up to v7, and this patchset has
been moved out of Drafts. That's fine by me, but I plan to focus on
things that need to get into PG19 before I focus on this, since
nothing is really blocked on it.
I agree with adding tap to the configuration summary, but I don't understand
the prove part, that just seems like a waste of vertical space.
I don't have a strong opinion; it can be dropped if no one finds it useful.
Yes, that needs to be baked into the image. Chocolatey is catastrophically
slow and unreliable. It's also just bad form to hit any service with such
repeated downloads.
Yep.
Why do we need pytest the program at all? Running the tests one-by-one with
pytest as a runner doesn't seem to make a whole lot of sense to me.
Even if we find reasons to implement our own custom runner for mtest
-- which I don't think we should do for the sake of architectural
purity alone; it doesn't seem to be a big deal in practice -- using
pytest directly ensures that the CI is using the same code paths as
individual developers who choose to use pytest as the top-level
runner.
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+  authentication \
+  isolation \
+  modules \
+  perl \
+  postmaster \
+  pytest \
+  recovery \
+  regress \
+  subscription
I'm onboard with that, but we should do it separately and probably check for
other cases where we should do it at the same time.
I'm not sure what context this is referring to? What are you on board with?
I think it'd be a seriously bad idea to start with no central infrastructure,
we'd be forced to duplicate that all over.
Right, I just want central infra to be pulled out of the new tests
that need them rather than the other way around.
Eventually we'll be forced to
introduce some central infrastructure, but we'll probably not go around and
carefully go through the existing tests for stuff that should now use the
common infrastructure.
Ehh... I don't want to encourage "regular" refactoring of test
fixtures, since that would be incredibly disruptive and cause backport
difficulties, but I think it's fine to expect some careful suite-wide
improvements to be made as we introduce a completely new way of
testing. (And I find composed tests in Python a lot easier to refactor
than Test::More scripts, since they've declared their logical
dependencies.)
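As a concrete (hypothetical) sketch of what those declared dependencies
buy us -- none of these names come from the patchset -- a test names the
fixtures it needs, each fixture names its own prerequisites, and pytest
rebuilds exactly that chain when the test is rerun in isolation:

    import pytest

    @pytest.fixture(scope="session")
    def install_dir(tmp_path_factory):
        # expensive shared setup, built at most once per session
        return tmp_path_factory.mktemp("install")

    @pytest.fixture
    def server(install_dir, tmp_path):
        # cheap per-test setup, layered on the shared piece
        return {"install": install_dir, "datadir": tmp_path}

    def test_datadir_is_private(server):
        assert server["datadir"].exists()

Refactoring then only has to follow those declared edges, instead of
reverse-engineering whatever state earlier Test::More steps left behind.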
Thanks!
--Jacob
On Mon Jan 5, 2026 at 9:19 PM CET, Jacob Champion wrote:
On Wed, Dec 17, 2025 at 8:10 AM Andres Freund <andres@anarazel.de> wrote:
Before it gets too far away from me: note that I have not yet been
able to get up to speed with the combined refactoring+feature patch
that Jelte added in v3, and it's now up to v7,
Attached is v8. It simplifies the Cirrus CI yaml, because the
dependencies are now baked into the images. I also removed the optional
dependency on uv. Meson/autoconf now simply search for the pytest binary in
the .venv directory too. Devs can then choose if they want to populate
.venv with pip or uv. Finally, if the pytest binary cannot be found,
there's a fallback attempt to use `python -m pytest`.
That's fine by me, but I plan to focus on
things that need to get into PG19 before I focus on this, since
nothing is really blocked on it.
Part of the reason why I've been trying to push this forward is that
automated tests for the GoAway patch are definitely blocked on this. (The
other reason is that I'd like to reduce the amount of perl I have to
read/write.)
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+  authentication \
+  isolation \
+  modules \
+  perl \
+  postmaster \
+  pytest \
+  recovery \
+  regress \
+  subscription
I'm onboard with that, but we should do it separately and probably check for
other cases where we should do it at the same time.
I'm not sure what context this is referring to? What are you on board with?
If I understood Andres correctly this was about splitting the items
across multiple lines. I moved this to a separate thread, and it was
committed by Michael in 9adf32da6b. So this has been resolved afaik.
I think it'd be a seriously bad idea to start with no central infrastructure,
we'd be forced to duplicate that all over.
Right, I just want central infra to be pulled out of the new tests
that need them rather than the other way around.
I'm not sure how you expect that to work in practice. I believe (and I
think Andres too) that there's some infra that we already know we'll
need for many tests, e.g. starting/stopping nodes, running queries,
handling errors. I don't think it makes sense to have those be pulled
out of new tests. You need some basics, otherwise no-one will want to
write tests. And even if they do, everyone ends up with different styles
of doing basic things. I'd rather coordinate on a bit of style upfront so
that tests behave similarly for common usages.
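For instance, the kind of common usage I have in mind looks roughly like
this (the fixture and helper names here are hypothetical, not the exact
API from the attached patches):

    def test_arithmetic(pg):
        # hypothetical helpers: the fixture hands back a running server,
        # sql() raises a Python exception on SQL errors, converts types,
        # and unpacks a single-cell result (no rows[0][0] indexing)
        assert pg.sql("SELECT 1 + 1") == 2

If every suite invents its own variant of that, tests stop looking alike
and reviewing them gets harder.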
Attachments:
Attachment: v8-0001-meson-Include-TAP-tests-in-the-configuration-summ.patch (text/x-patch)
From 8870b096cc0335739c48fe945500c8ae514e1dba Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 5 Sep 2025 16:39:08 -0700
Subject: [PATCH v8 1/7] meson: Include TAP tests in the configuration summary
...to make it obvious when they've been enabled. prove is added to the
executables list for good measure.
TODO: does Autoconf need something similar?
Per complaint by Peter Eisentraut.
---
meson.build | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/meson.build b/meson.build
index c3834a9dc8f..9d1c7ffc702 100644
--- a/meson.build
+++ b/meson.build
@@ -3971,6 +3971,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'prove': prove,
},
section: 'Programs',
)
@@ -4007,3 +4008,11 @@ summary(
section: 'External libraries',
list_sep: ' ',
)
+
+summary(
+ {
+ 'tap': tap_tests_enabled,
+ },
+ section: 'Other features',
+ list_sep: ' ',
+)
base-commit: 0547aeae0fd6f6d03dd7499c84145ad9e3aa51b9
--
2.52.0
Attachment: v8-0002-Add-support-for-pytest-test-suites.patch (text/x-patch)
From ee733745423dfeca9ff481e0fc567b51dcb0207e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v8 2/7] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
I've written a custom pgtap output plugin, used by the Meson mtest
runner, to fully control what we see during CI test failures. The
pytest-tap plugin would have been preferable, but it's now in
maintenance mode, and it has problems with accidentally suppressing
important collection failures.
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 14 ++-
.gitignore | 3 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 32 +++++-
meson.build | 107 +++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 1 +
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 16 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 616 insertions(+), 9 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 745bd198b42..b795bad0470 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -21,7 +21,8 @@ env:
# target to test, for all but windows
CHECK: check-world PROVE_FLAGS=$PROVE_FLAGS
- CHECKFLAGS: -Otarget
+ # TODO were we avoiding --keep-going on purpose?
+ CHECKFLAGS: -Otarget --keep-going
PROVE_FLAGS: --timer
# Build test dependencies as part of the build step, to see compiler
# errors/warnings in one place.
@@ -44,6 +45,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -315,6 +317,7 @@ task:
-Dlibcurl=enabled
-Dnls=enabled
-Dpam=enabled
+ -DPYTEST=pytest-3.12
setup_additional_packages_script: |
#pkgin -y install ...
@@ -518,14 +521,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -662,6 +666,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -711,6 +717,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -784,6 +791,7 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a550ce6194b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +44,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 02e4ec7890f..26863bc8e64 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19197,6 +19230,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index b90a220a635..162ef0114b4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2406,6 +2411,29 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, [pytest py.test])
+ if test -z "$PYTEST"; then
+ # Check for .venv in source directory
+ AC_MSG_CHECKING([for pytest in .venv])
+ if test -x "$srcdir/.venv/bin/pytest"; then
+ PYTEST="$srcdir/.venv/bin/pytest"
+ AC_MSG_RESULT([$PYTEST])
+ else
+ AC_MSG_RESULT([no])
+ # Try python -m pytest as a fallback
+ AC_MSG_CHECKING([whether python -m pytest works])
+ if "$PYTHON" -m pytest --version >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$PYTHON -m pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found. Install it or create a .venv with: python -m venv .venv && .venv/bin/pip install .])
+ fi
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 9d1c7ffc702..d91180c0dbb 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,53 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest_version = ''
+pytest_cmd = ['pytest'] # dummy, overwritten when pytest is found
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that
+# value after plugin loading. On lower versions pytest will throw an error even
+# when just running 'pytest --version'. So we need to configure it here too.
+# This won't help people manually running pytest outside of meson/make, but we
+# expect those to use a recent enough version of pytest anyway (and if not they
+# can manually configure PYTHONPATH too).
+pytest_env = {'PYTHONPATH': meson.project_source_root() / 'src' / 'test' / 'pytest'}
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(
+ get_option('PYTEST'),
+ dirs: [
+ meson.project_source_root() / '.venv/bin',
+ meson.project_source_root() / '.venv/Scripts',
+ ],
+ native: true, required: false)
+
+ if pytest.found()
+ pytest_enabled = true
+ pytest_version = run_command(pytest, '--version', env: pytest_env, check: false).stdout().strip().split(' ')[-1]
+ pytest_cmd = [pytest.full_path()]
+ else
+ # Try python -m pytest as a fallback
+ pytest_check = run_command(python, '-m', 'pytest', '--version', env: pytest_env, check: false)
+ if pytest_check.returncode() == 0
+ pytest_enabled = true
+ pytest_version = pytest_check.stdout().strip().split(' ')[-1]
+ pytest_cmd = [python.full_path(), '-m', 'pytest']
+ endif
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found. Install it or create a .venv with: python -m venv .venv && .venv/bin/pip install .')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3798,6 +3845,64 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = pytest_cmd
+
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', pytest_env['PYTHONPATH'])
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3972,6 +4077,7 @@ summary(
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
'prove': prove,
+ 'pytest': pytest_enabled ? ' '.join(pytest_cmd) + ' ' + pytest_version : not_found_dep,
},
section: 'Programs',
)
@@ -4012,6 +4118,7 @@ summary(
summary(
{
'tap': tap_tests_enabled,
+ 'pytest': pytest_enabled,
},
section: 'Other features',
list_sep: ' ',
diff --git a/meson_options.txt b/meson_options.txt
index 6a793f3e479..cb4825c3575 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 124df2c8582..778b59c9afb 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,8 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
+ 'PYTEST': pytest_enabled ? ' '.join(pytest_cmd) : '',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
diff --git a/src/test/Makefile b/src/test/Makefile
index 3eb0a06abb4..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -18,6 +18,7 @@ SUBDIRS = \
modules \
perl \
postmaster \
+ pytest \
recovery \
regress \
subscription
diff --git a/src/test/meson.build b/src/test/meson.build
index cd45cbf57fb..09175f0eaea 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..abd128dfa24
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,16 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_something.py',
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (e.g. \033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
--
2.52.0
Attachment: v8-0003-ci-Add-MTEST_SUITES-for-optional-test-tailoring.patch (text/x-patch)
From c7ed1796b7beeaf24a0b36f190e773271f1f9df8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Sep 2025 15:37:53 -0700
Subject: [PATCH v8 3/7] ci: Add MTEST_SUITES for optional test tailoring
Should make it easier to control the test cycle time for Cirrus. Add the
desired suites (remembering `--suite setup`!) to the top-level envvar.
---
.cirrus.tasks.yml | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index b795bad0470..388f5c75556 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -28,6 +28,7 @@ env:
# errors/warnings in one place.
MBUILD_TARGET: all testprep
MTEST_ARGS: --print-errorlogs --no-rebuild -C build
+ MTEST_SUITES: # --suite setup --suite ssl --suite ...
PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
@@ -249,7 +250,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# test runningcheck, freebsd chosen because it's currently fast enough
@@ -387,7 +388,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -603,7 +604,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
# so that we don't upload 64bit logs if 32bit fails
rm -rf build/
@@ -616,7 +617,7 @@ task:
su postgres <<-EOF
set -e
ulimit -c unlimited
- PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
+ PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS} ${MTEST_SUITES}
EOF
on_failure:
@@ -740,7 +741,7 @@ task:
test_world_script: |
ulimit -c unlimited # default is 0
ulimit -n 1024 # default is 256, pretty low
- meson test $MTEST_ARGS --num-processes ${TEST_JOBS}
+ meson test $MTEST_ARGS --num-processes ${TEST_JOBS} ${MTEST_SUITES}
on_failure:
<<: *on_failure_meson
@@ -820,7 +821,7 @@ task:
check_world_script: |
vcvarsall x64
- meson test %MTEST_ARGS% --num-processes %TEST_JOBS%
+ meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%
on_failure:
<<: *on_failure_meson
@@ -881,7 +882,7 @@ task:
upload_caches: ccache
test_world_script: |
- %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS%"
+ %BASH% -c "meson test %MTEST_ARGS% --num-processes %TEST_JOBS% %MTEST_SUITES%"
on_failure:
<<: *on_failure_meson
--
2.52.0
Attachment: v8-0004-Add-pytest-infrastructure-to-interact-with-Postgr.patch (text/x-patch)
From b72b5f5cf497394ff2d1af9d53aa487b48394e20 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v8 4/7] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be so easy to use that the actual
tests contain only the logic for the behaviour under test, as opposed to
a bunch of boilerplate. Examples: types get converted to their Python
counterparts automatically; errors become actual Python exceptions; and
results of queries that return only a single row or cell are unpacked
automatically, so you don't have to write rows[0][0] for a query that
returns a single cell.
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
---
doc/src/sgml/regress.sgml | 62 +-
pyproject.toml | 3 +
src/backend/utils/errcodes.txt | 5 +
src/test/pytest/README | 140 +-
src/test/pytest/libpq/__init__.py | 36 +
src/test/pytest/libpq/_core.py | 489 +++++
src/test/pytest/libpq/_error_base.py | 74 +
src/test/pytest/libpq/_generated_errors.py | 2116 ++++++++++++++++++++
src/test/pytest/libpq/errors.py | 39 +
src/test/pytest/meson.build | 5 +-
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 +
src/test/pytest/pypg/fixtures.py | 335 ++++
src/test/pytest/pypg/server.py | 470 +++++
src/test/pytest/pypg/util.py | 42 +
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 +
src/test/pytest/pyt/test_libpq.py | 172 ++
src/test/pytest/pyt/test_multi_server.py | 46 +
src/test/pytest/pyt/test_query_helpers.py | 347 ++++
src/tools/generate_pytest_libpq_errors.py | 147 ++
21 files changed, 4642 insertions(+), 3 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/_error_base.py
create mode 100644 src/test/pytest/libpq/_generated_errors.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
create mode 100755 src/tools/generate_pytest_libpq_errors.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..ea92e640b19 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,66 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need
+ <application>pytest</application> installed. You can either install it
+ system-wide, or create a virtual environment in the source directory:
+<programlisting>
+python -m venv .venv
+.venv/bin/pip install .
+</programlisting>
+ Alternatively, if you have <application>uv</application> installed:
+<programlisting>
+uv sync
+</programlisting>
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
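+ For example, to double the default:
+<programlisting>
+PG_TEST_TIMEOUT_DEFAULT=360 meson test --suite pytest
+</programlisting>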
+ </para>
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/backend/utils/errcodes.txt b/src/backend/utils/errcodes.txt
index 5b25402ebbe..b1d0ad4baf4 100644
--- a/src/backend/utils/errcodes.txt
+++ b/src/backend/utils/errcodes.txt
@@ -21,6 +21,11 @@
# doc/src/sgml/errcodes-table.sgml
# a SGML table of error codes for inclusion in the documentation
#
+# src/test/pytest/libpq/_generated_errors.py
+# Python exception classes for the pytest libpq wrapper
+# Note: This needs to be manually regenerated by running
+# src/tools/generate_pytest_libpq_errors.py
+#
# The format of this file is one error code per line, with the following
# whitespace-separated fields:
#
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..9dc50ca111f 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,139 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have either pytest or uv
+already installed.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf-based builds, you can run:
+ make check
+or
+ make installcheck
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+  A simple but user-friendly Python wrapper around libpq
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A connected PGconn instance to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example tests:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server is independent
+ assert node1.port != node2.port
+
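+A sketch of asserting on a specific error, assuming conn.sql raises the
+generated exception classes from libpq.errors:
+
+    import pytest
+    from libpq import errors
+
+    def test_division_by_zero(conn):
+        with pytest.raises(errors.DivisionByZero):
+            conn.sql("SELECT 1/0")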
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+    # After the test finishes, the original configuration is restored and
+ # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
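+
+For example (a sketch; changing wal_level requires a restart):
+
+    with pg.restarting() as s:
+        s.conf.set(wal_level="logical")
+    # Server is restarted here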
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
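+
+For example, a sketch that polls until a condition holds within the test's
+remaining time budget:
+
+    import time
+
+    def test_eventually_true(conn, remaining_timeout):
+        while not conn.sql("SELECT true"):
+            assert remaining_timeout() > 0, "test timed out"
+            time.sleep(1)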
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..cb4d18b6206
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError, LibpqWarning
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "LibpqWarning",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..0d77996d572
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,489 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError, make_error
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ if system == "Windows":
+ # On Windows, libpq.dll is confusingly in bindir, not libdir. And we
+        # need to add this directory to the search path.
+ libpq_path = os.path.join(bindir, name)
+ lib = ctypes.CDLL(libpq_path)
+ else:
+ libpq_path = os.path.join(libdir, name)
+ lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
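+# Examples (illustrative):
+#   _parse_array("{1,2,NULL}", 23) -> [1, 2, None]
+#   _parse_array("{{1,2},{3,4}}", 23) -> [[1, 2], [3, 4]]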
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
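+# Examples (illustrative):
+#   simplify_query_results([(1,)]) -> 1
+#   simplify_query_results([(1, 2)]) -> (1, 2)
+#   simplify_query_results([(1,), (2,)]) -> [1, 2]
+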
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises an appropriate LibpqError subclass based on the error fields.
+ Extracts SQLSTATE and other diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise make_error(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
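+# Example (illustrative):
+#   connstr({"host": "localhost", "application_name": "my test"})
+#   -> "host=localhost application_name='my test'"
+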
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/_error_base.py b/src/test/pytest/libpq/_error_base.py
new file mode 100644
index 00000000000..5c70c077193
--- /dev/null
+++ b/src/test/pytest/libpq/_error_base.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Base exception classes for libpq errors and warnings.
+"""
+
+from typing import Optional
+
+
+class LibpqExceptionMixin:
+ """Mixin providing PostgreSQL error field attributes."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
+
+
+class LibpqError(LibpqExceptionMixin, RuntimeError):
+ """Base exception for libpq errors."""
+
+ pass
+
+
+class LibpqWarning(LibpqExceptionMixin, UserWarning):
+ """Base exception for libpq warnings."""
+
+ pass
diff --git a/src/test/pytest/libpq/_generated_errors.py b/src/test/pytest/libpq/_generated_errors.py
new file mode 100644
index 00000000000..f50f3143580
--- /dev/null
+++ b/src/test/pytest/libpq/_generated_errors.py
@@ -0,0 +1,2116 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.
+
+"""
+Generated PostgreSQL error classes mapped from SQLSTATE codes.
+"""
+
+from typing import Dict
+
+from ._error_base import LibpqError, LibpqWarning
+
+
+class SuccessfulCompletion(LibpqError):
+ """SQLSTATE 00000 - successful completion."""
+
+ pass
+
+
+class Warning(LibpqWarning):
+ """SQLSTATE 01000 - warning."""
+
+ pass
+
+
+class DynamicResultSetsReturnedWarning(Warning):
+ """SQLSTATE 0100C - dynamic result sets returned."""
+
+ pass
+
+
+class ImplicitZeroBitPaddingWarning(Warning):
+ """SQLSTATE 01008 - implicit zero bit padding."""
+
+ pass
+
+
+class NullValueEliminatedInSetFunctionWarning(Warning):
+ """SQLSTATE 01003 - null value eliminated in set function."""
+
+ pass
+
+
+class PrivilegeNotGrantedWarning(Warning):
+ """SQLSTATE 01007 - privilege not granted."""
+
+ pass
+
+
+class PrivilegeNotRevokedWarning(Warning):
+ """SQLSTATE 01006 - privilege not revoked."""
+
+ pass
+
+
+class StringDataRightTruncationWarning(Warning):
+ """SQLSTATE 01004 - string data right truncation."""
+
+ pass
+
+
+class DeprecatedFeatureWarning(Warning):
+ """SQLSTATE 01P01 - deprecated feature."""
+
+ pass
+
+
+class NoData(LibpqError):
+ """SQLSTATE 02000 - no data."""
+
+ pass
+
+
+class NoAdditionalDynamicResultSetsReturned(NoData):
+ """SQLSTATE 02001 - no additional dynamic result sets returned."""
+
+ pass
+
+
+class SQLStatementNotYetComplete(LibpqError):
+ """SQLSTATE 03000 - sql statement not yet complete."""
+
+ pass
+
+
+class ConnectionException(LibpqError):
+ """SQLSTATE 08000 - connection exception."""
+
+ pass
+
+
+class ConnectionDoesNotExist(ConnectionException):
+ """SQLSTATE 08003 - connection does not exist."""
+
+ pass
+
+
+class ConnectionFailure(ConnectionException):
+ """SQLSTATE 08006 - connection failure."""
+
+ pass
+
+
+class SQLClientUnableToEstablishSQLConnection(ConnectionException):
+ """SQLSTATE 08001 - sqlclient unable to establish sqlconnection."""
+
+ pass
+
+
+class SQLServerRejectedEstablishmentOfSQLConnection(ConnectionException):
+ """SQLSTATE 08004 - sqlserver rejected establishment of sqlconnection."""
+
+ pass
+
+
+class TransactionResolutionUnknown(ConnectionException):
+ """SQLSTATE 08007 - transaction resolution unknown."""
+
+ pass
+
+
+class ProtocolViolation(ConnectionException):
+ """SQLSTATE 08P01 - protocol violation."""
+
+ pass
+
+
+class TriggeredActionException(LibpqError):
+ """SQLSTATE 09000 - triggered action exception."""
+
+ pass
+
+
+class FeatureNotSupported(LibpqError):
+ """SQLSTATE 0A000 - feature not supported."""
+
+ pass
+
+
+class InvalidTransactionInitiation(LibpqError):
+ """SQLSTATE 0B000 - invalid transaction initiation."""
+
+ pass
+
+
+class LocatorException(LibpqError):
+ """SQLSTATE 0F000 - locator exception."""
+
+ pass
+
+
+class InvalidLocatorSpecification(LocatorException):
+ """SQLSTATE 0F001 - invalid locator specification."""
+
+ pass
+
+
+class InvalidGrantor(LibpqError):
+ """SQLSTATE 0L000 - invalid grantor."""
+
+ pass
+
+
+class InvalidGrantOperation(InvalidGrantor):
+ """SQLSTATE 0LP01 - invalid grant operation."""
+
+ pass
+
+
+class InvalidRoleSpecification(LibpqError):
+ """SQLSTATE 0P000 - invalid role specification."""
+
+ pass
+
+
+class DiagnosticsException(LibpqError):
+ """SQLSTATE 0Z000 - diagnostics exception."""
+
+ pass
+
+
+class StackedDiagnosticsAccessedWithoutActiveHandler(DiagnosticsException):
+ """SQLSTATE 0Z002 - stacked diagnostics accessed without active handler."""
+
+ pass
+
+
+class InvalidArgumentForXquery(LibpqError):
+ """SQLSTATE 10608 - invalid argument for xquery."""
+
+ pass
+
+
+class CaseNotFound(LibpqError):
+ """SQLSTATE 20000 - case not found."""
+
+ pass
+
+
+class CardinalityViolation(LibpqError):
+ """SQLSTATE 21000 - cardinality violation."""
+
+ pass
+
+
+class DataException(LibpqError):
+ """SQLSTATE 22000 - data exception."""
+
+ pass
+
+
+class ArraySubscriptError(DataException):
+ """SQLSTATE 2202E - array subscript error."""
+
+ pass
+
+
+class CharacterNotInRepertoire(DataException):
+ """SQLSTATE 22021 - character not in repertoire."""
+
+ pass
+
+
+class DatetimeFieldOverflow(DataException):
+ """SQLSTATE 22008 - datetime field overflow."""
+
+ pass
+
+
+class DivisionByZero(DataException):
+ """SQLSTATE 22012 - division by zero."""
+
+ pass
+
+
+class ErrorInAssignment(DataException):
+ """SQLSTATE 22005 - error in assignment."""
+
+ pass
+
+
+class EscapeCharacterConflict(DataException):
+ """SQLSTATE 2200B - escape character conflict."""
+
+ pass
+
+
+class IndicatorOverflow(DataException):
+ """SQLSTATE 22022 - indicator overflow."""
+
+ pass
+
+
+class IntervalFieldOverflow(DataException):
+ """SQLSTATE 22015 - interval field overflow."""
+
+ pass
+
+
+class InvalidArgumentForLogarithm(DataException):
+ """SQLSTATE 2201E - invalid argument for logarithm."""
+
+ pass
+
+
+class InvalidArgumentForNtileFunction(DataException):
+ """SQLSTATE 22014 - invalid argument for ntile function."""
+
+ pass
+
+
+class InvalidArgumentForNthValueFunction(DataException):
+ """SQLSTATE 22016 - invalid argument for nth value function."""
+
+ pass
+
+
+class InvalidArgumentForPowerFunction(DataException):
+ """SQLSTATE 2201F - invalid argument for power function."""
+
+ pass
+
+
+class InvalidArgumentForWidthBucketFunction(DataException):
+ """SQLSTATE 2201G - invalid argument for width bucket function."""
+
+ pass
+
+
+class InvalidCharacterValueForCast(DataException):
+ """SQLSTATE 22018 - invalid character value for cast."""
+
+ pass
+
+
+class InvalidDatetimeFormat(DataException):
+ """SQLSTATE 22007 - invalid datetime format."""
+
+ pass
+
+
+class InvalidEscapeCharacter(DataException):
+ """SQLSTATE 22019 - invalid escape character."""
+
+ pass
+
+
+class InvalidEscapeOctet(DataException):
+ """SQLSTATE 2200D - invalid escape octet."""
+
+ pass
+
+
+class InvalidEscapeSequence(DataException):
+ """SQLSTATE 22025 - invalid escape sequence."""
+
+ pass
+
+
+class NonstandardUseOfEscapeCharacter(DataException):
+ """SQLSTATE 22P06 - nonstandard use of escape character."""
+
+ pass
+
+
+class InvalidIndicatorParameterValue(DataException):
+ """SQLSTATE 22010 - invalid indicator parameter value."""
+
+ pass
+
+
+class InvalidParameterValue(DataException):
+ """SQLSTATE 22023 - invalid parameter value."""
+
+ pass
+
+
+class InvalidPrecedingOrFollowingSize(DataException):
+ """SQLSTATE 22013 - invalid preceding or following size."""
+
+ pass
+
+
+class InvalidRegularExpression(DataException):
+ """SQLSTATE 2201B - invalid regular expression."""
+
+ pass
+
+
+class InvalidRowCountInLimitClause(DataException):
+ """SQLSTATE 2201W - invalid row count in limit clause."""
+
+ pass
+
+
+class InvalidRowCountInResultOffsetClause(DataException):
+ """SQLSTATE 2201X - invalid row count in result offset clause."""
+
+ pass
+
+
+class InvalidTablesampleArgument(DataException):
+ """SQLSTATE 2202H - invalid tablesample argument."""
+
+ pass
+
+
+class InvalidTablesampleRepeat(DataException):
+ """SQLSTATE 2202G - invalid tablesample repeat."""
+
+ pass
+
+
+class InvalidTimeZoneDisplacementValue(DataException):
+ """SQLSTATE 22009 - invalid time zone displacement value."""
+
+ pass
+
+
+class InvalidUseOfEscapeCharacter(DataException):
+ """SQLSTATE 2200C - invalid use of escape character."""
+
+ pass
+
+
+class MostSpecificTypeMismatch(DataException):
+ """SQLSTATE 2200G - most specific type mismatch."""
+
+ pass
+
+
+class NullValueNotAllowed(DataException):
+ """SQLSTATE 22004 - null value not allowed."""
+
+ pass
+
+
+class NullValueNoIndicatorParameter(DataException):
+ """SQLSTATE 22002 - null value no indicator parameter."""
+
+ pass
+
+
+class NumericValueOutOfRange(DataException):
+ """SQLSTATE 22003 - numeric value out of range."""
+
+ pass
+
+
+class SequenceGeneratorLimitExceeded(DataException):
+ """SQLSTATE 2200H - sequence generator limit exceeded."""
+
+ pass
+
+
+class StringDataLengthMismatch(DataException):
+ """SQLSTATE 22026 - string data length mismatch."""
+
+ pass
+
+
+class StringDataRightTruncation(DataException):
+ """SQLSTATE 22001 - string data right truncation."""
+
+ pass
+
+
+class SubstringError(DataException):
+ """SQLSTATE 22011 - substring error."""
+
+ pass
+
+
+class TrimError(DataException):
+ """SQLSTATE 22027 - trim error."""
+
+ pass
+
+
+class UnterminatedCString(DataException):
+ """SQLSTATE 22024 - unterminated c string."""
+
+ pass
+
+
+class ZeroLengthCharacterString(DataException):
+ """SQLSTATE 2200F - zero length character string."""
+
+ pass
+
+
+class FloatingPointException(DataException):
+ """SQLSTATE 22P01 - floating point exception."""
+
+ pass
+
+
+class InvalidTextRepresentation(DataException):
+ """SQLSTATE 22P02 - invalid text representation."""
+
+ pass
+
+
+class InvalidBinaryRepresentation(DataException):
+ """SQLSTATE 22P03 - invalid binary representation."""
+
+ pass
+
+
+class BadCopyFileFormat(DataException):
+ """SQLSTATE 22P04 - bad copy file format."""
+
+ pass
+
+
+class UntranslatableCharacter(DataException):
+ """SQLSTATE 22P05 - untranslatable character."""
+
+ pass
+
+
+class NotAnXmlDocument(DataException):
+ """SQLSTATE 2200L - not an xml document."""
+
+ pass
+
+
+class InvalidXmlDocument(DataException):
+ """SQLSTATE 2200M - invalid xml document."""
+
+ pass
+
+
+class InvalidXmlContent(DataException):
+ """SQLSTATE 2200N - invalid xml content."""
+
+ pass
+
+
+class InvalidXmlComment(DataException):
+ """SQLSTATE 2200S - invalid xml comment."""
+
+ pass
+
+
+class InvalidXmlProcessingInstruction(DataException):
+ """SQLSTATE 2200T - invalid xml processing instruction."""
+
+ pass
+
+
+class DuplicateJsonObjectKeyValue(DataException):
+ """SQLSTATE 22030 - duplicate json object key value."""
+
+ pass
+
+
+class InvalidArgumentForSQLJsonDatetimeFunction(DataException):
+ """SQLSTATE 22031 - invalid argument for sql json datetime function."""
+
+ pass
+
+
+class InvalidJsonText(DataException):
+ """SQLSTATE 22032 - invalid json text."""
+
+ pass
+
+
+class InvalidSQLJsonSubscript(DataException):
+ """SQLSTATE 22033 - invalid sql json subscript."""
+
+ pass
+
+
+class MoreThanOneSQLJsonItem(DataException):
+ """SQLSTATE 22034 - more than one sql json item."""
+
+ pass
+
+
+class NoSQLJsonItem(DataException):
+ """SQLSTATE 22035 - no sql json item."""
+
+ pass
+
+
+class NonNumericSQLJsonItem(DataException):
+ """SQLSTATE 22036 - non numeric sql json item."""
+
+ pass
+
+
+class NonUniqueKeysInAJsonObject(DataException):
+ """SQLSTATE 22037 - non unique keys in a json object."""
+
+ pass
+
+
+class SingletonSQLJsonItemRequired(DataException):
+ """SQLSTATE 22038 - singleton sql json item required."""
+
+ pass
+
+
+class SQLJsonArrayNotFound(DataException):
+ """SQLSTATE 22039 - sql json array not found."""
+
+ pass
+
+
+class SQLJsonMemberNotFound(DataException):
+ """SQLSTATE 2203A - sql json member not found."""
+
+ pass
+
+
+class SQLJsonNumberNotFound(DataException):
+ """SQLSTATE 2203B - sql json number not found."""
+
+ pass
+
+
+class SQLJsonObjectNotFound(DataException):
+ """SQLSTATE 2203C - sql json object not found."""
+
+ pass
+
+
+class TooManyJsonArrayElements(DataException):
+ """SQLSTATE 2203D - too many json array elements."""
+
+ pass
+
+
+class TooManyJsonObjectMembers(DataException):
+ """SQLSTATE 2203E - too many json object members."""
+
+ pass
+
+
+class SQLJsonScalarRequired(DataException):
+ """SQLSTATE 2203F - sql json scalar required."""
+
+ pass
+
+
+class SQLJsonItemCannotBeCastToTargetType(DataException):
+ """SQLSTATE 2203G - sql json item cannot be cast to target type."""
+
+ pass
+
+
+class IntegrityConstraintViolation(LibpqError):
+ """SQLSTATE 23000 - integrity constraint violation."""
+
+ pass
+
+
+class RestrictViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23001 - restrict violation."""
+
+ pass
+
+
+class NotNullViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23502 - not null violation."""
+
+ pass
+
+
+class ForeignKeyViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23503 - foreign key violation."""
+
+ pass
+
+
+class UniqueViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23505 - unique violation."""
+
+ pass
+
+
+class CheckViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23514 - check violation."""
+
+ pass
+
+
+class ExclusionViolation(IntegrityConstraintViolation):
+ """SQLSTATE 23P01 - exclusion violation."""
+
+ pass
+
+
+class InvalidCursorState(LibpqError):
+ """SQLSTATE 24000 - invalid cursor state."""
+
+ pass
+
+
+class InvalidTransactionState(LibpqError):
+ """SQLSTATE 25000 - invalid transaction state."""
+
+ pass
+
+
+class ActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25001 - active sql transaction."""
+
+ pass
+
+
+class BranchTransactionAlreadyActive(InvalidTransactionState):
+ """SQLSTATE 25002 - branch transaction already active."""
+
+ pass
+
+
+class HeldCursorRequiresSameIsolationLevel(InvalidTransactionState):
+ """SQLSTATE 25008 - held cursor requires same isolation level."""
+
+ pass
+
+
+class InappropriateAccessModeForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25003 - inappropriate access mode for branch transaction."""
+
+ pass
+
+
+class InappropriateIsolationLevelForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25004 - inappropriate isolation level for branch transaction."""
+
+ pass
+
+
+class NoActiveSQLTransactionForBranchTransaction(InvalidTransactionState):
+ """SQLSTATE 25005 - no active sql transaction for branch transaction."""
+
+ pass
+
+
+class ReadOnlySQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25006 - read only sql transaction."""
+
+ pass
+
+
+class SchemaAndDataStatementMixingNotSupported(InvalidTransactionState):
+ """SQLSTATE 25007 - schema and data statement mixing not supported."""
+
+ pass
+
+
+class NoActiveSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P01 - no active sql transaction."""
+
+ pass
+
+
+class InFailedSQLTransaction(InvalidTransactionState):
+ """SQLSTATE 25P02 - in failed sql transaction."""
+
+ pass
+
+
+class IdleInTransactionSessionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P03 - idle in transaction session timeout."""
+
+ pass
+
+
+class TransactionTimeout(InvalidTransactionState):
+ """SQLSTATE 25P04 - transaction timeout."""
+
+ pass
+
+
+class InvalidSQLStatementName(LibpqError):
+ """SQLSTATE 26000 - invalid sql statement name."""
+
+ pass
+
+
+class TriggeredDataChangeViolation(LibpqError):
+ """SQLSTATE 27000 - triggered data change violation."""
+
+ pass
+
+
+class InvalidAuthorizationSpecification(LibpqError):
+ """SQLSTATE 28000 - invalid authorization specification."""
+
+ pass
+
+
+class InvalidPassword(InvalidAuthorizationSpecification):
+ """SQLSTATE 28P01 - invalid password."""
+
+ pass
+
+
+class DependentPrivilegeDescriptorsStillExist(LibpqError):
+ """SQLSTATE 2B000 - dependent privilege descriptors still exist."""
+
+ pass
+
+
+class DependentObjectsStillExist(DependentPrivilegeDescriptorsStillExist):
+ """SQLSTATE 2BP01 - dependent objects still exist."""
+
+ pass
+
+
+class InvalidTransactionTermination(LibpqError):
+ """SQLSTATE 2D000 - invalid transaction termination."""
+
+ pass
+
+
+class SQLRoutineException(LibpqError):
+ """SQLSTATE 2F000 - sql routine exception."""
+
+ pass
+
+
+class FunctionExecutedNoReturnStatement(SQLRoutineException):
+ """SQLSTATE 2F005 - function executed no return statement."""
+
+ pass
+
+
+class SREModifyingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F002 - modifying sql data not permitted."""
+
+ pass
+
+
+class SREProhibitedSQLStatementAttempted(SQLRoutineException):
+ """SQLSTATE 2F003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class SREReadingSQLDataNotPermitted(SQLRoutineException):
+ """SQLSTATE 2F004 - reading sql data not permitted."""
+
+ pass
+
+
+class InvalidCursorName(LibpqError):
+ """SQLSTATE 34000 - invalid cursor name."""
+
+ pass
+
+
+class ExternalRoutineException(LibpqError):
+ """SQLSTATE 38000 - external routine exception."""
+
+ pass
+
+
+class ContainingSQLNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38001 - containing sql not permitted."""
+
+ pass
+
+
+class EREModifyingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38002 - modifying sql data not permitted."""
+
+ pass
+
+
+class EREProhibitedSQLStatementAttempted(ExternalRoutineException):
+ """SQLSTATE 38003 - prohibited sql statement attempted."""
+
+ pass
+
+
+class EREReadingSQLDataNotPermitted(ExternalRoutineException):
+ """SQLSTATE 38004 - reading sql data not permitted."""
+
+ pass
+
+
+class ExternalRoutineInvocationException(LibpqError):
+ """SQLSTATE 39000 - external routine invocation exception."""
+
+ pass
+
+
+class InvalidSqlstateReturned(ExternalRoutineInvocationException):
+ """SQLSTATE 39001 - invalid sqlstate returned."""
+
+ pass
+
+
+class ERIENullValueNotAllowed(ExternalRoutineInvocationException):
+ """SQLSTATE 39004 - null value not allowed."""
+
+ pass
+
+
+class TriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P01 - trigger protocol violated."""
+
+ pass
+
+
+class SrfProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P02 - srf protocol violated."""
+
+ pass
+
+
+class EventTriggerProtocolViolated(ExternalRoutineInvocationException):
+ """SQLSTATE 39P03 - event trigger protocol violated."""
+
+ pass
+
+
+class SavepointException(LibpqError):
+ """SQLSTATE 3B000 - savepoint exception."""
+
+ pass
+
+
+class InvalidSavepointSpecification(SavepointException):
+ """SQLSTATE 3B001 - invalid savepoint specification."""
+
+ pass
+
+
+class InvalidCatalogName(LibpqError):
+ """SQLSTATE 3D000 - invalid catalog name."""
+
+ pass
+
+
+class InvalidSchemaName(LibpqError):
+ """SQLSTATE 3F000 - invalid schema name."""
+
+ pass
+
+
+class TransactionRollback(LibpqError):
+ """SQLSTATE 40000 - transaction rollback."""
+
+ pass
+
+
+class TransactionIntegrityConstraintViolation(TransactionRollback):
+ """SQLSTATE 40002 - transaction integrity constraint violation."""
+
+ pass
+
+
+class SerializationFailure(TransactionRollback):
+ """SQLSTATE 40001 - serialization failure."""
+
+ pass
+
+
+class StatementCompletionUnknown(TransactionRollback):
+ """SQLSTATE 40003 - statement completion unknown."""
+
+ pass
+
+
+class DeadlockDetected(TransactionRollback):
+ """SQLSTATE 40P01 - deadlock detected."""
+
+ pass
+
+
+class SyntaxErrorOrAccessRuleViolation(LibpqError):
+ """SQLSTATE 42000 - syntax error or access rule violation."""
+
+ pass
+
+
+class SyntaxError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42601 - syntax error."""
+
+ pass
+
+
+class InsufficientPrivilege(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42501 - insufficient privilege."""
+
+ pass
+
+
+class CannotCoerce(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42846 - cannot coerce."""
+
+ pass
+
+
+class GroupingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42803 - grouping error."""
+
+ pass
+
+
+class WindowingError(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P20 - windowing error."""
+
+ pass
+
+
+class InvalidRecursion(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P19 - invalid recursion."""
+
+ pass
+
+
+class InvalidForeignKey(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42830 - invalid foreign key."""
+
+ pass
+
+
+class InvalidName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42602 - invalid name."""
+
+ pass
+
+
+class NameTooLong(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42622 - name too long."""
+
+ pass
+
+
+class ReservedName(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42939 - reserved name."""
+
+ pass
+
+
+class DatatypeMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42804 - datatype mismatch."""
+
+ pass
+
+
+class IndeterminateDatatype(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P18 - indeterminate datatype."""
+
+ pass
+
+
+class CollationMismatch(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P21 - collation mismatch."""
+
+ pass
+
+
+class IndeterminateCollation(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P22 - indeterminate collation."""
+
+ pass
+
+
+class WrongObjectType(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42809 - wrong object type."""
+
+ pass
+
+
+class GeneratedAlways(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 428C9 - generated always."""
+
+ pass
+
+
+class UndefinedColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42703 - undefined column."""
+
+ pass
+
+
+class UndefinedFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42883 - undefined function."""
+
+ pass
+
+
+class UndefinedTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P01 - undefined table."""
+
+ pass
+
+
+class UndefinedParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P02 - undefined parameter."""
+
+ pass
+
+
+class UndefinedObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42704 - undefined object."""
+
+ pass
+
+
+class DuplicateColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42701 - duplicate column."""
+
+ pass
+
+
+class DuplicateCursor(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P03 - duplicate cursor."""
+
+ pass
+
+
+class DuplicateDatabase(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P04 - duplicate database."""
+
+ pass
+
+
+class DuplicateFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42723 - duplicate function."""
+
+ pass
+
+
+class DuplicatePreparedStatement(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P05 - duplicate prepared statement."""
+
+ pass
+
+
+class DuplicateSchema(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P06 - duplicate schema."""
+
+ pass
+
+
+class DuplicateTable(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P07 - duplicate table."""
+
+ pass
+
+
+class DuplicateAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42712 - duplicate alias."""
+
+ pass
+
+
+class DuplicateObject(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42710 - duplicate object."""
+
+ pass
+
+
+class AmbiguousColumn(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42702 - ambiguous column."""
+
+ pass
+
+
+class AmbiguousFunction(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42725 - ambiguous function."""
+
+ pass
+
+
+class AmbiguousParameter(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P08 - ambiguous parameter."""
+
+ pass
+
+
+class AmbiguousAlias(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P09 - ambiguous alias."""
+
+ pass
+
+
+class InvalidColumnReference(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P10 - invalid column reference."""
+
+ pass
+
+
+class InvalidColumnDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42611 - invalid column definition."""
+
+ pass
+
+
+class InvalidCursorDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P11 - invalid cursor definition."""
+
+ pass
+
+
+class InvalidDatabaseDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P12 - invalid database definition."""
+
+ pass
+
+
+class InvalidFunctionDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P13 - invalid function definition."""
+
+ pass
+
+
+class InvalidPreparedStatementDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P14 - invalid prepared statement definition."""
+
+ pass
+
+
+class InvalidSchemaDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P15 - invalid schema definition."""
+
+ pass
+
+
+class InvalidTableDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P16 - invalid table definition."""
+
+ pass
+
+
+class InvalidObjectDefinition(SyntaxErrorOrAccessRuleViolation):
+ """SQLSTATE 42P17 - invalid object definition."""
+
+ pass
+
+
+class WithCheckOptionViolation(LibpqError):
+ """SQLSTATE 44000 - with check option violation."""
+
+ pass
+
+
+class InsufficientResources(LibpqError):
+ """SQLSTATE 53000 - insufficient resources."""
+
+ pass
+
+
+class DiskFull(InsufficientResources):
+ """SQLSTATE 53100 - disk full."""
+
+ pass
+
+
+class OutOfMemory(InsufficientResources):
+ """SQLSTATE 53200 - out of memory."""
+
+ pass
+
+
+class TooManyConnections(InsufficientResources):
+ """SQLSTATE 53300 - too many connections."""
+
+ pass
+
+
+class ConfigurationLimitExceeded(InsufficientResources):
+ """SQLSTATE 53400 - configuration limit exceeded."""
+
+ pass
+
+
+class ProgramLimitExceeded(LibpqError):
+ """SQLSTATE 54000 - program limit exceeded."""
+
+ pass
+
+
+class StatementTooComplex(ProgramLimitExceeded):
+ """SQLSTATE 54001 - statement too complex."""
+
+ pass
+
+
+class TooManyColumns(ProgramLimitExceeded):
+ """SQLSTATE 54011 - too many columns."""
+
+ pass
+
+
+class TooManyArguments(ProgramLimitExceeded):
+ """SQLSTATE 54023 - too many arguments."""
+
+ pass
+
+
+class ObjectNotInPrerequisiteState(LibpqError):
+ """SQLSTATE 55000 - object not in prerequisite state."""
+
+ pass
+
+
+class ObjectInUse(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55006 - object in use."""
+
+ pass
+
+
+class CantChangeRuntimeParam(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P02 - cant change runtime param."""
+
+ pass
+
+
+class LockNotAvailable(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P03 - lock not available."""
+
+ pass
+
+
+class UnsafeNewEnumValueUsage(ObjectNotInPrerequisiteState):
+ """SQLSTATE 55P04 - unsafe new enum value usage."""
+
+ pass
+
+
+class OperatorIntervention(LibpqError):
+ """SQLSTATE 57000 - operator intervention."""
+
+ pass
+
+
+class QueryCanceled(OperatorIntervention):
+ """SQLSTATE 57014 - query canceled."""
+
+ pass
+
+
+class AdminShutdown(OperatorIntervention):
+ """SQLSTATE 57P01 - admin shutdown."""
+
+ pass
+
+
+class CrashShutdown(OperatorIntervention):
+ """SQLSTATE 57P02 - crash shutdown."""
+
+ pass
+
+
+class CannotConnectNow(OperatorIntervention):
+ """SQLSTATE 57P03 - cannot connect now."""
+
+ pass
+
+
+class DatabaseDropped(OperatorIntervention):
+ """SQLSTATE 57P04 - database dropped."""
+
+ pass
+
+
+class IdleSessionTimeout(OperatorIntervention):
+ """SQLSTATE 57P05 - idle session timeout."""
+
+ pass
+
+
+class SystemError(LibpqError):
+ """SQLSTATE 58000 - system error."""
+
+ pass
+
+
+class IoError(SystemError):
+ """SQLSTATE 58030 - io error."""
+
+ pass
+
+
+class UndefinedFile(SystemError):
+ """SQLSTATE 58P01 - undefined file."""
+
+ pass
+
+
+class DuplicateFile(SystemError):
+ """SQLSTATE 58P02 - duplicate file."""
+
+ pass
+
+
+class FileNameTooLong(SystemError):
+ """SQLSTATE 58P03 - file name too long."""
+
+ pass
+
+
+class ConfigFileError(LibpqError):
+ """SQLSTATE F0000 - config file error."""
+
+ pass
+
+
+class LockFileExists(ConfigFileError):
+ """SQLSTATE F0001 - lock file exists."""
+
+ pass
+
+
+class FDWError(LibpqError):
+ """SQLSTATE HV000 - fdw error."""
+
+ pass
+
+
+class FDWColumnNameNotFound(FDWError):
+ """SQLSTATE HV005 - fdw column name not found."""
+
+ pass
+
+
+class FDWDynamicParameterValueNeeded(FDWError):
+ """SQLSTATE HV002 - fdw dynamic parameter value needed."""
+
+ pass
+
+
+class FDWFunctionSequenceError(FDWError):
+ """SQLSTATE HV010 - fdw function sequence error."""
+
+ pass
+
+
+class FDWInconsistentDescriptorInformation(FDWError):
+ """SQLSTATE HV021 - fdw inconsistent descriptor information."""
+
+ pass
+
+
+class FDWInvalidAttributeValue(FDWError):
+ """SQLSTATE HV024 - fdw invalid attribute value."""
+
+ pass
+
+
+class FDWInvalidColumnName(FDWError):
+ """SQLSTATE HV007 - fdw invalid column name."""
+
+ pass
+
+
+class FDWInvalidColumnNumber(FDWError):
+ """SQLSTATE HV008 - fdw invalid column number."""
+
+ pass
+
+
+class FDWInvalidDataType(FDWError):
+ """SQLSTATE HV004 - fdw invalid data type."""
+
+ pass
+
+
+class FDWInvalidDataTypeDescriptors(FDWError):
+ """SQLSTATE HV006 - fdw invalid data type descriptors."""
+
+ pass
+
+
+class FDWInvalidDescriptorFieldIdentifier(FDWError):
+ """SQLSTATE HV091 - fdw invalid descriptor field identifier."""
+
+ pass
+
+
+class FDWInvalidHandle(FDWError):
+ """SQLSTATE HV00B - fdw invalid handle."""
+
+ pass
+
+
+class FDWInvalidOptionIndex(FDWError):
+ """SQLSTATE HV00C - fdw invalid option index."""
+
+ pass
+
+
+class FDWInvalidOptionName(FDWError):
+ """SQLSTATE HV00D - fdw invalid option name."""
+
+ pass
+
+
+class FDWInvalidStringLengthOrBufferLength(FDWError):
+ """SQLSTATE HV090 - fdw invalid string length or buffer length."""
+
+ pass
+
+
+class FDWInvalidStringFormat(FDWError):
+ """SQLSTATE HV00A - fdw invalid string format."""
+
+ pass
+
+
+class FDWInvalidUseOfNullPointer(FDWError):
+ """SQLSTATE HV009 - fdw invalid use of null pointer."""
+
+ pass
+
+
+class FDWTooManyHandles(FDWError):
+ """SQLSTATE HV014 - fdw too many handles."""
+
+ pass
+
+
+class FDWOutOfMemory(FDWError):
+ """SQLSTATE HV001 - fdw out of memory."""
+
+ pass
+
+
+class FDWNoSchemas(FDWError):
+ """SQLSTATE HV00P - fdw no schemas."""
+
+ pass
+
+
+class FDWOptionNameNotFound(FDWError):
+ """SQLSTATE HV00J - fdw option name not found."""
+
+ pass
+
+
+class FDWReplyHandle(FDWError):
+ """SQLSTATE HV00K - fdw reply handle."""
+
+ pass
+
+
+class FDWSchemaNotFound(FDWError):
+ """SQLSTATE HV00Q - fdw schema not found."""
+
+ pass
+
+
+class FDWTableNotFound(FDWError):
+ """SQLSTATE HV00R - fdw table not found."""
+
+ pass
+
+
+class FDWUnableToCreateExecution(FDWError):
+ """SQLSTATE HV00L - fdw unable to create execution."""
+
+ pass
+
+
+class FDWUnableToCreateReply(FDWError):
+ """SQLSTATE HV00M - fdw unable to create reply."""
+
+ pass
+
+
+class FDWUnableToEstablishConnection(FDWError):
+ """SQLSTATE HV00N - fdw unable to establish connection."""
+
+ pass
+
+
+class PlpgsqlError(LibpqError):
+ """SQLSTATE P0000 - plpgsql error."""
+
+ pass
+
+
+class RaiseException(PlpgsqlError):
+ """SQLSTATE P0001 - raise exception."""
+
+ pass
+
+
+class NoDataFound(PlpgsqlError):
+ """SQLSTATE P0002 - no data found."""
+
+ pass
+
+
+class TooManyRows(PlpgsqlError):
+ """SQLSTATE P0003 - too many rows."""
+
+ pass
+
+
+class AssertFailure(PlpgsqlError):
+ """SQLSTATE P0004 - assert failure."""
+
+ pass
+
+
+class InternalError(LibpqError):
+ """SQLSTATE XX000 - internal error."""
+
+ pass
+
+
+class DataCorrupted(InternalError):
+ """SQLSTATE XX001 - data corrupted."""
+
+ pass
+
+
+class IndexCorrupted(InternalError):
+ """SQLSTATE XX002 - index corrupted."""
+
+ pass
+
+
+SQLSTATE_TO_EXCEPTION: Dict[str, type] = {
+ "00000": SuccessfulCompletion,
+ "01000": Warning,
+ "0100C": DynamicResultSetsReturnedWarning,
+ "01008": ImplicitZeroBitPaddingWarning,
+ "01003": NullValueEliminatedInSetFunctionWarning,
+ "01007": PrivilegeNotGrantedWarning,
+ "01006": PrivilegeNotRevokedWarning,
+ "01004": StringDataRightTruncationWarning,
+ "01P01": DeprecatedFeatureWarning,
+ "02000": NoData,
+ "02001": NoAdditionalDynamicResultSetsReturned,
+ "03000": SQLStatementNotYetComplete,
+ "08000": ConnectionException,
+ "08003": ConnectionDoesNotExist,
+ "08006": ConnectionFailure,
+ "08001": SQLClientUnableToEstablishSQLConnection,
+ "08004": SQLServerRejectedEstablishmentOfSQLConnection,
+ "08007": TransactionResolutionUnknown,
+ "08P01": ProtocolViolation,
+ "09000": TriggeredActionException,
+ "0A000": FeatureNotSupported,
+ "0B000": InvalidTransactionInitiation,
+ "0F000": LocatorException,
+ "0F001": InvalidLocatorSpecification,
+ "0L000": InvalidGrantor,
+ "0LP01": InvalidGrantOperation,
+ "0P000": InvalidRoleSpecification,
+ "0Z000": DiagnosticsException,
+ "0Z002": StackedDiagnosticsAccessedWithoutActiveHandler,
+ "10608": InvalidArgumentForXquery,
+ "20000": CaseNotFound,
+ "21000": CardinalityViolation,
+ "22000": DataException,
+ "2202E": ArraySubscriptError,
+ "22021": CharacterNotInRepertoire,
+ "22008": DatetimeFieldOverflow,
+ "22012": DivisionByZero,
+ "22005": ErrorInAssignment,
+ "2200B": EscapeCharacterConflict,
+ "22022": IndicatorOverflow,
+ "22015": IntervalFieldOverflow,
+ "2201E": InvalidArgumentForLogarithm,
+ "22014": InvalidArgumentForNtileFunction,
+ "22016": InvalidArgumentForNthValueFunction,
+ "2201F": InvalidArgumentForPowerFunction,
+ "2201G": InvalidArgumentForWidthBucketFunction,
+ "22018": InvalidCharacterValueForCast,
+ "22007": InvalidDatetimeFormat,
+ "22019": InvalidEscapeCharacter,
+ "2200D": InvalidEscapeOctet,
+ "22025": InvalidEscapeSequence,
+ "22P06": NonstandardUseOfEscapeCharacter,
+ "22010": InvalidIndicatorParameterValue,
+ "22023": InvalidParameterValue,
+ "22013": InvalidPrecedingOrFollowingSize,
+ "2201B": InvalidRegularExpression,
+ "2201W": InvalidRowCountInLimitClause,
+ "2201X": InvalidRowCountInResultOffsetClause,
+ "2202H": InvalidTablesampleArgument,
+ "2202G": InvalidTablesampleRepeat,
+ "22009": InvalidTimeZoneDisplacementValue,
+ "2200C": InvalidUseOfEscapeCharacter,
+ "2200G": MostSpecificTypeMismatch,
+ "22004": NullValueNotAllowed,
+ "22002": NullValueNoIndicatorParameter,
+ "22003": NumericValueOutOfRange,
+ "2200H": SequenceGeneratorLimitExceeded,
+ "22026": StringDataLengthMismatch,
+ "22001": StringDataRightTruncation,
+ "22011": SubstringError,
+ "22027": TrimError,
+ "22024": UnterminatedCString,
+ "2200F": ZeroLengthCharacterString,
+ "22P01": FloatingPointException,
+ "22P02": InvalidTextRepresentation,
+ "22P03": InvalidBinaryRepresentation,
+ "22P04": BadCopyFileFormat,
+ "22P05": UntranslatableCharacter,
+ "2200L": NotAnXmlDocument,
+ "2200M": InvalidXmlDocument,
+ "2200N": InvalidXmlContent,
+ "2200S": InvalidXmlComment,
+ "2200T": InvalidXmlProcessingInstruction,
+ "22030": DuplicateJsonObjectKeyValue,
+ "22031": InvalidArgumentForSQLJsonDatetimeFunction,
+ "22032": InvalidJsonText,
+ "22033": InvalidSQLJsonSubscript,
+ "22034": MoreThanOneSQLJsonItem,
+ "22035": NoSQLJsonItem,
+ "22036": NonNumericSQLJsonItem,
+ "22037": NonUniqueKeysInAJsonObject,
+ "22038": SingletonSQLJsonItemRequired,
+ "22039": SQLJsonArrayNotFound,
+ "2203A": SQLJsonMemberNotFound,
+ "2203B": SQLJsonNumberNotFound,
+ "2203C": SQLJsonObjectNotFound,
+ "2203D": TooManyJsonArrayElements,
+ "2203E": TooManyJsonObjectMembers,
+ "2203F": SQLJsonScalarRequired,
+ "2203G": SQLJsonItemCannotBeCastToTargetType,
+ "23000": IntegrityConstraintViolation,
+ "23001": RestrictViolation,
+ "23502": NotNullViolation,
+ "23503": ForeignKeyViolation,
+ "23505": UniqueViolation,
+ "23514": CheckViolation,
+ "23P01": ExclusionViolation,
+ "24000": InvalidCursorState,
+ "25000": InvalidTransactionState,
+ "25001": ActiveSQLTransaction,
+ "25002": BranchTransactionAlreadyActive,
+ "25008": HeldCursorRequiresSameIsolationLevel,
+ "25003": InappropriateAccessModeForBranchTransaction,
+ "25004": InappropriateIsolationLevelForBranchTransaction,
+ "25005": NoActiveSQLTransactionForBranchTransaction,
+ "25006": ReadOnlySQLTransaction,
+ "25007": SchemaAndDataStatementMixingNotSupported,
+ "25P01": NoActiveSQLTransaction,
+ "25P02": InFailedSQLTransaction,
+ "25P03": IdleInTransactionSessionTimeout,
+ "25P04": TransactionTimeout,
+ "26000": InvalidSQLStatementName,
+ "27000": TriggeredDataChangeViolation,
+ "28000": InvalidAuthorizationSpecification,
+ "28P01": InvalidPassword,
+ "2B000": DependentPrivilegeDescriptorsStillExist,
+ "2BP01": DependentObjectsStillExist,
+ "2D000": InvalidTransactionTermination,
+ "2F000": SQLRoutineException,
+ "2F005": FunctionExecutedNoReturnStatement,
+ "2F002": SREModifyingSQLDataNotPermitted,
+ "2F003": SREProhibitedSQLStatementAttempted,
+ "2F004": SREReadingSQLDataNotPermitted,
+ "34000": InvalidCursorName,
+ "38000": ExternalRoutineException,
+ "38001": ContainingSQLNotPermitted,
+ "38002": EREModifyingSQLDataNotPermitted,
+ "38003": EREProhibitedSQLStatementAttempted,
+ "38004": EREReadingSQLDataNotPermitted,
+ "39000": ExternalRoutineInvocationException,
+ "39001": InvalidSqlstateReturned,
+ "39004": ERIENullValueNotAllowed,
+ "39P01": TriggerProtocolViolated,
+ "39P02": SrfProtocolViolated,
+ "39P03": EventTriggerProtocolViolated,
+ "3B000": SavepointException,
+ "3B001": InvalidSavepointSpecification,
+ "3D000": InvalidCatalogName,
+ "3F000": InvalidSchemaName,
+ "40000": TransactionRollback,
+ "40002": TransactionIntegrityConstraintViolation,
+ "40001": SerializationFailure,
+ "40003": StatementCompletionUnknown,
+ "40P01": DeadlockDetected,
+ "42000": SyntaxErrorOrAccessRuleViolation,
+ "42601": SyntaxError,
+ "42501": InsufficientPrivilege,
+ "42846": CannotCoerce,
+ "42803": GroupingError,
+ "42P20": WindowingError,
+ "42P19": InvalidRecursion,
+ "42830": InvalidForeignKey,
+ "42602": InvalidName,
+ "42622": NameTooLong,
+ "42939": ReservedName,
+ "42804": DatatypeMismatch,
+ "42P18": IndeterminateDatatype,
+ "42P21": CollationMismatch,
+ "42P22": IndeterminateCollation,
+ "42809": WrongObjectType,
+ "428C9": GeneratedAlways,
+ "42703": UndefinedColumn,
+ "42883": UndefinedFunction,
+ "42P01": UndefinedTable,
+ "42P02": UndefinedParameter,
+ "42704": UndefinedObject,
+ "42701": DuplicateColumn,
+ "42P03": DuplicateCursor,
+ "42P04": DuplicateDatabase,
+ "42723": DuplicateFunction,
+ "42P05": DuplicatePreparedStatement,
+ "42P06": DuplicateSchema,
+ "42P07": DuplicateTable,
+ "42712": DuplicateAlias,
+ "42710": DuplicateObject,
+ "42702": AmbiguousColumn,
+ "42725": AmbiguousFunction,
+ "42P08": AmbiguousParameter,
+ "42P09": AmbiguousAlias,
+ "42P10": InvalidColumnReference,
+ "42611": InvalidColumnDefinition,
+ "42P11": InvalidCursorDefinition,
+ "42P12": InvalidDatabaseDefinition,
+ "42P13": InvalidFunctionDefinition,
+ "42P14": InvalidPreparedStatementDefinition,
+ "42P15": InvalidSchemaDefinition,
+ "42P16": InvalidTableDefinition,
+ "42P17": InvalidObjectDefinition,
+ "44000": WithCheckOptionViolation,
+ "53000": InsufficientResources,
+ "53100": DiskFull,
+ "53200": OutOfMemory,
+ "53300": TooManyConnections,
+ "53400": ConfigurationLimitExceeded,
+ "54000": ProgramLimitExceeded,
+ "54001": StatementTooComplex,
+ "54011": TooManyColumns,
+ "54023": TooManyArguments,
+ "55000": ObjectNotInPrerequisiteState,
+ "55006": ObjectInUse,
+ "55P02": CantChangeRuntimeParam,
+ "55P03": LockNotAvailable,
+ "55P04": UnsafeNewEnumValueUsage,
+ "57000": OperatorIntervention,
+ "57014": QueryCanceled,
+ "57P01": AdminShutdown,
+ "57P02": CrashShutdown,
+ "57P03": CannotConnectNow,
+ "57P04": DatabaseDropped,
+ "57P05": IdleSessionTimeout,
+ "58000": SystemError,
+ "58030": IoError,
+ "58P01": UndefinedFile,
+ "58P02": DuplicateFile,
+ "58P03": FileNameTooLong,
+ "F0000": ConfigFileError,
+ "F0001": LockFileExists,
+ "HV000": FDWError,
+ "HV005": FDWColumnNameNotFound,
+ "HV002": FDWDynamicParameterValueNeeded,
+ "HV010": FDWFunctionSequenceError,
+ "HV021": FDWInconsistentDescriptorInformation,
+ "HV024": FDWInvalidAttributeValue,
+ "HV007": FDWInvalidColumnName,
+ "HV008": FDWInvalidColumnNumber,
+ "HV004": FDWInvalidDataType,
+ "HV006": FDWInvalidDataTypeDescriptors,
+ "HV091": FDWInvalidDescriptorFieldIdentifier,
+ "HV00B": FDWInvalidHandle,
+ "HV00C": FDWInvalidOptionIndex,
+ "HV00D": FDWInvalidOptionName,
+ "HV090": FDWInvalidStringLengthOrBufferLength,
+ "HV00A": FDWInvalidStringFormat,
+ "HV009": FDWInvalidUseOfNullPointer,
+ "HV014": FDWTooManyHandles,
+ "HV001": FDWOutOfMemory,
+ "HV00P": FDWNoSchemas,
+ "HV00J": FDWOptionNameNotFound,
+ "HV00K": FDWReplyHandle,
+ "HV00Q": FDWSchemaNotFound,
+ "HV00R": FDWTableNotFound,
+ "HV00L": FDWUnableToCreateExecution,
+ "HV00M": FDWUnableToCreateReply,
+ "HV00N": FDWUnableToEstablishConnection,
+ "P0000": PlpgsqlError,
+ "P0001": RaiseException,
+ "P0002": NoDataFound,
+ "P0003": TooManyRows,
+ "P0004": AssertFailure,
+ "XX000": InternalError,
+ "XX001": DataCorrupted,
+ "XX002": IndexCorrupted,
+}
+
+
+__all__ = [
+ "InvalidCursorName",
+ "UndefinedParameter",
+ "UndefinedColumn",
+ "NotAnXmlDocument",
+ "FDWOutOfMemory",
+ "InvalidRoleSpecification",
+ "InvalidArgumentForNthValueFunction",
+ "SQLJsonObjectNotFound",
+ "FDWSchemaNotFound",
+ "InvalidParameterValue",
+ "InvalidTableDefinition",
+ "AssertFailure",
+ "FDWInvalidOptionName",
+ "InvalidEscapeOctet",
+ "ReadOnlySQLTransaction",
+ "ExternalRoutineInvocationException",
+ "CrashShutdown",
+ "FDWInvalidOptionIndex",
+ "NotNullViolation",
+ "ConfigFileError",
+ "InvalidSQLJsonSubscript",
+ "InvalidForeignKey",
+ "InsufficientResources",
+ "ObjectNotInPrerequisiteState",
+ "InvalidRowCountInLimitClause",
+ "IntervalFieldOverflow",
+ "CollationMismatch",
+ "InvalidArgumentForNtileFunction",
+ "InvalidCharacterValueForCast",
+ "NonUniqueKeysInAJsonObject",
+ "DependentPrivilegeDescriptorsStillExist",
+ "InFailedSQLTransaction",
+ "GroupingError",
+ "TransactionTimeout",
+ "CaseNotFound",
+ "ConnectionException",
+ "DuplicateJsonObjectKeyValue",
+ "InvalidSchemaDefinition",
+ "FDWUnableToCreateReply",
+ "UndefinedTable",
+ "SequenceGeneratorLimitExceeded",
+ "InvalidJsonText",
+ "IdleSessionTimeout",
+ "NullValueNotAllowed",
+ "BranchTransactionAlreadyActive",
+ "InvalidGrantOperation",
+ "NullValueNoIndicatorParameter",
+ "ProtocolViolation",
+ "FDWInvalidDataTypeDescriptors",
+ "TriggeredDataChangeViolation",
+ "ExternalRoutineException",
+ "InvalidSqlstateReturned",
+ "PlpgsqlError",
+ "InvalidXmlContent",
+ "TriggeredActionException",
+ "SQLClientUnableToEstablishSQLConnection",
+ "FDWTableNotFound",
+ "NumericValueOutOfRange",
+ "RestrictViolation",
+ "AmbiguousParameter",
+ "StatementTooComplex",
+ "UnsafeNewEnumValueUsage",
+ "NonNumericSQLJsonItem",
+ "InvalidIndicatorParameterValue",
+ "ExclusionViolation",
+ "OperatorIntervention",
+ "QueryCanceled",
+ "Warning",
+ "InvalidArgumentForSQLJsonDatetimeFunction",
+ "ForeignKeyViolation",
+ "StringDataLengthMismatch",
+ "SQLRoutineException",
+ "TooManyConnections",
+ "TooManyJsonObjectMembers",
+ "NoData",
+ "UntranslatableCharacter",
+ "FDWUnableToEstablishConnection",
+ "LockFileExists",
+ "SREReadingSQLDataNotPermitted",
+ "IndeterminateDatatype",
+ "CheckViolation",
+ "InvalidDatabaseDefinition",
+ "NoActiveSQLTransactionForBranchTransaction",
+ "SQLServerRejectedEstablishmentOfSQLConnection",
+ "DuplicateFile",
+ "FDWInvalidColumnNumber",
+ "TransactionRollback",
+ "MoreThanOneSQLJsonItem",
+ "WithCheckOptionViolation",
+ "FDWNoSchemas",
+ "GeneratedAlways",
+ "CannotConnectNow",
+ "CardinalityViolation",
+ "InvalidAuthorizationSpecification",
+ "SQLJsonNumberNotFound",
+ "SQLJsonMemberNotFound",
+ "InvalidUseOfEscapeCharacter",
+ "UnterminatedCString",
+ "TrimError",
+ "SrfProtocolViolated",
+ "DiskFull",
+ "TooManyColumns",
+ "InvalidObjectDefinition",
+ "InvalidArgumentForLogarithm",
+ "TooManyJsonArrayElements",
+ "OutOfMemory",
+ "EREProhibitedSQLStatementAttempted",
+ "FDWInvalidStringFormat",
+ "StackedDiagnosticsAccessedWithoutActiveHandler",
+ "SchemaAndDataStatementMixingNotSupported",
+ "InternalError",
+ "InvalidEscapeCharacter",
+ "FDWError",
+ "ImplicitZeroBitPaddingWarning",
+ "DivisionByZero",
+ "InvalidTablesampleArgument",
+ "DeadlockDetected",
+ "CantChangeRuntimeParam",
+ "UndefinedObject",
+ "UniqueViolation",
+ "InvalidCursorDefinition",
+ "ConnectionFailure",
+ "UndefinedFunction",
+ "FDWFunctionSequenceError",
+ "ErrorInAssignment",
+ "SuccessfulCompletion",
+ "StringDataRightTruncation",
+ "FDWTooManyHandles",
+ "FDWInvalidDataType",
+ "ActiveSQLTransaction",
+ "InvalidTextRepresentation",
+ "InvalidSQLStatementName",
+ "PrivilegeNotGrantedWarning",
+ "SREModifyingSQLDataNotPermitted",
+ "IndeterminateCollation",
+ "SystemError",
+ "NullValueEliminatedInSetFunctionWarning",
+ "DependentObjectsStillExist",
+ "InvalidSchemaName",
+ "DuplicateColumn",
+ "FunctionExecutedNoReturnStatement",
+ "InvalidColumnDefinition",
+ "DynamicResultSetsReturnedWarning",
+ "IdleInTransactionSessionTimeout",
+ "StatementCompletionUnknown",
+ "CannotCoerce",
+ "InvalidTransactionState",
+ "DuplicateTable",
+ "BadCopyFileFormat",
+ "ZeroLengthCharacterString",
+ "SyntaxErrorOrAccessRuleViolation",
+ "SingletonSQLJsonItemRequired",
+ "IndexCorrupted",
+ "FDWInvalidColumnName",
+ "DataCorrupted",
+ "ERIENullValueNotAllowed",
+ "ArraySubscriptError",
+ "FDWReplyHandle",
+ "DiagnosticsException",
+ "InvalidTablesampleRepeat",
+ "SQLJsonItemCannotBeCastToTargetType",
+ "FDWInvalidHandle",
+ "InvalidPassword",
+ "InvalidEscapeSequence",
+ "EscapeCharacterConflict",
+ "InvalidSavepointSpecification",
+ "FDWInvalidAttributeValue",
+ "ContainingSQLNotPermitted",
+ "LocatorException",
+ "DatatypeMismatch",
+ "InvalidCursorState",
+ "InvalidName",
+ "IndicatorOverflow",
+ "ReservedName",
+ "DatetimeFieldOverflow",
+ "FDWInconsistentDescriptorInformation",
+ "FloatingPointException",
+ "AmbiguousAlias",
+ "InvalidRecursion",
+ "WrongObjectType",
+ "UndefinedFile",
+ "LockNotAvailable",
+ "InvalidRowCountInResultOffsetClause",
+ "ObjectInUse",
+ "DeprecatedFeatureWarning",
+ "FDWDynamicParameterValueNeeded",
+ "DuplicateFunction",
+ "InvalidXmlDocument",
+ "StringDataRightTruncationWarning",
+ "DuplicatePreparedStatement",
+ "InvalidGrantor",
+ "EventTriggerProtocolViolated",
+ "FDWInvalidUseOfNullPointer",
+ "FDWUnableToCreateExecution",
+ "ConnectionDoesNotExist",
+ "InvalidCatalogName",
+ "InvalidArgumentForXquery",
+ "FDWColumnNameNotFound",
+ "TransactionIntegrityConstraintViolation",
+ "InvalidPreparedStatementDefinition",
+ "FDWInvalidDescriptorFieldIdentifier",
+ "FDWOptionNameNotFound",
+ "InvalidArgumentForPowerFunction",
+ "FDWInvalidStringLengthOrBufferLength",
+ "SREProhibitedSQLStatementAttempted",
+ "NoDataFound",
+ "DuplicateDatabase",
+ "FeatureNotSupported",
+ "IntegrityConstraintViolation",
+ "AmbiguousColumn",
+ "PrivilegeNotRevokedWarning",
+ "FileNameTooLong",
+ "InvalidArgumentForWidthBucketFunction",
+ "HeldCursorRequiresSameIsolationLevel",
+ "NoSQLJsonItem",
+ "IoError",
+ "SavepointException",
+ "NoActiveSQLTransaction",
+ "InvalidFunctionDefinition",
+ "AdminShutdown",
+ "DatabaseDropped",
+ "InvalidRegularExpression",
+ "WindowingError",
+ "InvalidColumnReference",
+ "InvalidBinaryRepresentation",
+ "SQLJsonScalarRequired",
+ "ConfigurationLimitExceeded",
+ "SyntaxError",
+ "SerializationFailure",
+ "ProgramLimitExceeded",
+ "DuplicateSchema",
+ "SQLStatementNotYetComplete",
+ "LibpqError",
+ "DataException",
+ "SubstringError",
+ "InvalidLocatorSpecification",
+ "InappropriateAccessModeForBranchTransaction",
+ "EREModifyingSQLDataNotPermitted",
+ "InsufficientPrivilege",
+ "NoAdditionalDynamicResultSetsReturned",
+ "SQLJsonArrayNotFound",
+ "NameTooLong",
+ "InvalidTimeZoneDisplacementValue",
+ "InappropriateIsolationLevelForBranchTransaction",
+ "RaiseException",
+ "EREReadingSQLDataNotPermitted",
+ "TriggerProtocolViolated",
+ "NonstandardUseOfEscapeCharacter",
+ "InvalidTransactionInitiation",
+ "DuplicateAlias",
+ "TransactionResolutionUnknown",
+ "TooManyRows",
+ "InvalidXmlComment",
+ "MostSpecificTypeMismatch",
+ "DuplicateObject",
+ "DuplicateCursor",
+ "AmbiguousFunction",
+ "TooManyArguments",
+ "InvalidXmlProcessingInstruction",
+ "InvalidTransactionTermination",
+ "InvalidDatetimeFormat",
+ "InvalidPrecedingOrFollowingSize",
+ "CharacterNotInRepertoire",
+ "SQLSTATE_TO_EXCEPTION",
+]
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..764a96c2478
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+PostgreSQL error types mapped from SQLSTATE codes.
+
+This module provides LibpqError and its subclasses for handling PostgreSQL
+errors based on SQLSTATE codes. The exception classes in _generated_errors.py
+are auto-generated from src/backend/utils/errcodes.txt.
+
+To regenerate: src/tools/generate_pytest_libpq_errors.py
+"""
+
+from typing import Optional
+
+from ._error_base import LibpqError, LibpqWarning
+from ._generated_errors import (
+ SQLSTATE_TO_EXCEPTION,
+)
+from ._generated_errors import * # noqa: F403
+
+
+def get_exception_class(sqlstate: Optional[str]) -> type:
+ """Get the appropriate exception class for a SQLSTATE code."""
+ return SQLSTATE_TO_EXCEPTION.get(sqlstate, LibpqError)
+
+
+def make_error(message: str, *, sqlstate: Optional[str] = None, **kwargs) -> LibpqError:
+ """Create an appropriate LibpqError subclass based on the SQLSTATE code."""
+ exc_class = get_exception_class(sqlstate)
+ return exc_class(message, sqlstate=sqlstate, **kwargs)
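+
+
+# A minimal sketch of how the mapping above is meant to be used, with class
+# names taken from the generated module: a known SQLSTATE yields its specific
+# subclass, while unknown codes fall back to the LibpqError base.
+#
+# err = make_error("duplicate key value", sqlstate="23505")
+# assert isinstance(err, UniqueViolation)
+#
+# err = make_error("mystery failure", sqlstate="ZZ999")
+# assert type(err) is LibpqError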
+
+
+__all__ = [
+ "LibpqError",
+ "LibpqWarning",
+ "make_error",
+]
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index abd128dfa24..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,7 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
- 'pyt/test_something.py',
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extras(*keys: str):
+ """
+ A convenience decorator which skips tests unless all of the required keys
+ are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+ pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys are not present
+ in PG_TEST_EXTRA. Use this inside fixtures where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
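+
+ Example (bounding a blocking call for the remainder of the test):
+
+ sock.settimeout(remaining_timeout())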
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+ This fixture is per-module, which means it's generally only useful for
+ configuring timeouts of operations that happen in the setup phase of other
+ module-scoped fixtures. If you use it in a test, each subsequent test in
+ the module gets a reduced timeout.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context, which allows certain settings to be
+ overridden at the module level through autouse fixtures. An example of
+ this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
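+
+ Example:
+ def test_basic(pg):
+ assert pg.sql("SELECT 1") == 1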
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
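+
+ For example:
+
+ hba.prepend(
+ ["local", "all", "testuser", "trust"],
+ "host all all 127.0.0.1/32 trust",
+ )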
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
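+
+ For example:
+
+ conf.set(log_min_messages="debug1", work_mem="64MB")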
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one. This
+ is currently only allowed if hostaddr is also provided.
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # Including the host to use for connections - either the socket
+ # directory or TCP address
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+ # Between closing temp_sock and server start, we're racing
+ # against anything that wants to open up ephemeral ports, so try not to
+ # put any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(os.path.join(self._bindir, "psql"), "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg_server_session.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
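+
+ For example:
+
+ with server.subcontext():
+ server.create_users("app") # the DROP USER cleanup runs on exit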
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, any number of matches is accepted.
+ If a number, exactly that many matches are required.
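+
+ Example:
+
+ with pg.log_contains(r"connection authorized"):
+ pg.connect()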
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6, which we still like to support.
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
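+ """capture runs the given command and returns its stdout, minus the trailing newline."""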
+ return run(
+ command, *args, stdout=stdout, encoding=encoding, **kwargs
+ ).stdout.removesuffix("\n")
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..ad109039668
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+import libpq
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises SyntaxError with correct SQLSTATE."""
+ with pytest.raises(libpq.errors.SyntaxError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields and can be caught as parent class."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(libpq.errors.UniqueViolation) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Sanity-check the canonical string form: 36 characters including hyphens.
+ assert len(str(result)) == 36
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
diff --git a/src/tools/generate_pytest_libpq_errors.py b/src/tools/generate_pytest_libpq_errors.py
new file mode 100755
index 00000000000..ba92891c17a
--- /dev/null
+++ b/src/tools/generate_pytest_libpq_errors.py
@@ -0,0 +1,147 @@
+#!/usr/bin/env python3
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Generate src/test/pytest/libpq/_generated_errors.py from errcodes.txt.
+
+Each SQLSTATE becomes an exception class (for example, division_by_zero
+becomes DivisionByZero), the class-level "000" codes become parent
+classes, and SQLSTATE_TO_EXCEPTION maps codes to classes.
+"""
+
+import sys
+from pathlib import Path
+
+
+ACRONYMS = {"sql", "fdw"}
+WORD_MAP = {
+ "sqlclient": "SQLClient",
+ "sqlserver": "SQLServer",
+ "sqlconnection": "SQLConnection",
+}
+
+
+def snake_to_pascal(name: str) -> str:
+ """Convert snake_case to PascalCase, keeping acronyms uppercase."""
+ words = []
+ for word in name.split("_"):
+ if word in WORD_MAP:
+ words.append(WORD_MAP[word])
+ elif word in ACRONYMS:
+ words.append(word.upper())
+ else:
+ words.append(word.capitalize())
+ return "".join(words)
+
+
+def parse_errcodes(path: Path):
+ """Parse errcodes.txt and return list of (sqlstate, macro_name, spec_name) tuples."""
+ errors = []
+
+ with open(path) as f:
+ for line in f:
+ parts = line.split()
+ if len(parts) >= 4 and len(parts[0]) == 5:
+ sqlstate, _, macro_name, spec_name = parts[:4]
+ errors.append((sqlstate, macro_name, spec_name))
+
+ return errors
+
+
+def macro_to_class_name(macro_name: str) -> str:
+ """Convert ERRCODE_FOO_BAR to FooBar."""
+ name = macro_name.removeprefix("ERRCODE_")
+ # Move WARNING prefix to the end as a suffix
+ if name.startswith("WARNING_"):
+ name = name.removeprefix("WARNING_") + "_WARNING"
+ return snake_to_pascal(name.lower())
+
+
+def generate_errors(errcodes_path: Path):
+ """Generate the _generated_errors.py content."""
+ errors = parse_errcodes(errcodes_path)
+
+ # Find spec_names that appear more than once (collisions)
+ spec_name_counts: dict[str, int] = {}
+ for _, _, spec_name in errors:
+ spec_name_counts[spec_name] = spec_name_counts.get(spec_name, 0) + 1
+ colliding_spec_names = {
+ name for name, count in spec_name_counts.items() if count > 1
+ }
+
+ lines = [
+ "# Copyright (c) 2025, PostgreSQL Global Development Group",
+ "# This file is generated by src/tools/generate_pytest_libpq_errors.py - do not edit directly.",
+ "",
+ '"""',
+ "Generated PostgreSQL error classes mapped from SQLSTATE codes.",
+ '"""',
+ "",
+ "from typing import Dict",
+ "",
+ "from ._error_base import LibpqError, LibpqWarning",
+ "",
+ "",
+ ]
+
+ generated_classes = {"LibpqError"}
+ sqlstate_to_exception = {}
+
+ for sqlstate, macro_name, spec_name in errors:
+ # 000 errors define the parent class for all errors in this SQLSTATE class
+ if sqlstate.endswith("000"):
+ exc_name = snake_to_pascal(spec_name)
+ if exc_name == "Warning":
+ parent = "LibpqWarning"
+ else:
+ parent = "LibpqError"
+ else:
+ if spec_name in colliding_spec_names:
+ exc_name = macro_to_class_name(macro_name)
+ else:
+ exc_name = snake_to_pascal(spec_name)
+ # Use parent class if available, otherwise LibpqError
+ parent = sqlstate_to_exception.get(sqlstate[:2] + "000", "LibpqError")
+ # Warnings should end with "Warning"
+ if parent == "Warning" and not exc_name.endswith("Warning"):
+ exc_name += "Warning"
+
+ generated_classes.add(exc_name)
+ sqlstate_to_exception[sqlstate] = exc_name
+ lines.extend(
+ [
+ f"class {exc_name}({parent}):",
+ f' """SQLSTATE {sqlstate} - {spec_name.replace("_", " ")}."""',
+ "",
+ " pass",
+ "",
+ "",
+ ]
+ )
+
+ lines.append("SQLSTATE_TO_EXCEPTION: Dict[str, type] = {")
+ for sqlstate, exc_name in sqlstate_to_exception.items():
+ lines.append(f' "{sqlstate}": {exc_name},')
+ lines.extend(["}", "", ""])
+
+ all_exports = list(generated_classes) + ["SQLSTATE_TO_EXCEPTION"]
+ lines.append("__all__ = [")
+ for name in all_exports:
+ lines.append(f' "{name}",')
+ lines.append("]")
+
+ return "\n".join(lines) + "\n"
+
+
+if __name__ == "__main__":
+ script_dir = Path(__file__).resolve().parent
+ src_root = script_dir.parent.parent
+
+ errcodes_path = src_root / "src" / "backend" / "utils" / "errcodes.txt"
+ output_path = (
+ src_root / "src" / "test" / "pytest" / "libpq" / "_generated_errors.py"
+ )
+
+ if not errcodes_path.exists():
+ print(f"Error: {errcodes_path} not found", file=sys.stderr)
+ sys.exit(1)
+
+ output = generate_errors(errcodes_path)
+ output_path.write_text(output)
+ print(f"Generated {output_path}")
--
2.52.0
v8-0005-Convert-load-balance-tests-from-perl-to-python.patch
From 75b7774a2488a2d3e2f8dfe4aafdb9a4b82dc256 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v8 5/7] Convert load balance tests from perl to python
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index bf4baa92917..4c4bdb4b3a3 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c5ecd9c3a87..56790dd92a9 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes with different socket directories.
+
+ Each node has its own Unix socket directory for isolation.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+ Windows or Linux are required to run this test because these OSes allow
+ binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
+ don't. We need to bind to different IP addresses, so that we can use these
+ different IP addresses in the hosts file.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+ except (OSError, IOError):
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+ # Statistically, each node should receive at least one connection: the
+ # probability that a given node receives none is (2/3)^50 ≈ 1.57e-9.
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 1f970ff994b..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index 210ec1ff517..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
v8-0006-WIP-pytest-Add-some-SSL-client-tests.patch
From cc5897bb79b78fdffbe0b9993b775c0c4553a4ce Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v8 6/7] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
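To make the callback flow concrete, here is a minimal sketch of a test
driving the mock (modeled on test_server_with_ssl_disabled() below; the
connect and tcp_server fixtures come from this suite):

import socket
import struct

import pytest
from libpq import LibpqError

def test_refused_sslrequest(connect, tcp_server):
    def refuse_ssl(s: socket.socket):
        # Read the 8-byte SSLRequest packet, then refuse it.
        pktlen = struct.unpack("!I", s.recv(4))[0]
        assert pktlen == 8
        s.recv(4)  # discard the request code
        s.send(b"N")

    # The callback runs on the listener's background thread; any
    # exception it raises is re-raised during fixture teardown.
    tcp_server.background(refuse_ssl)

    with pytest.raises(LibpqError, match="server does not support SSL"):
        connect(**tcp_server.conninfo, sslmode="require")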
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 2 +
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 424 insertions(+)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 388f5c75556..8621db4a9a9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -647,6 +647,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -667,6 +668,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index aa062945fb9..287729ad9fb 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index 9e5bdbb6136..6ec274d8165 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
v8-0007-WIP-pytest-Add-some-server-side-SSL-tests.patch
From 841ca9ad59e9f3065e57e93f26d0ba60c108bccf Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v8 7/7] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
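For context, "direct SSL" means the mock client never sends an
SSLRequest: it wraps the raw TCP socket in TLS immediately and
advertises the protocol via ALPN. A minimal sketch of that client side
(the address, port, and CA path are placeholders):

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="ca.crt")  # placeholder CA path
ctx.set_alpn_protocols(["postgresql"])  # ALPN token for direct SSL

with socket.create_connection(("127.0.0.1", 5432)) as s:
    # No SSLRequest round trip: TLS starts on the raw socket.
    with ctx.wrap_socket(s, server_hostname="example.org") as conn:
        pass  # the startup packet is then sent over the TLS channel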
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
Hi,
On 2026-01-06 20:07:22 +0100, Jelte Fennema-Nio wrote:
On Mon Jan 5, 2026 at 9:19 PM CET, Jacob Champion wrote:
On Wed, Dec 17, 2025 at 8:10 AM Andres Freund <andres@anarazel.de> wrote:
Before it gets too far away from me: note that I have not yet been
able to get up to speed with the combined refactoring+feature patch
that Jelte added in v3, and it's now up to v7,
Attached is v8. It simplifies the Cirrus CI yaml, because the
dependencies are now baked into the images. I also removed the optional
dependency on uv. Meson/autoconf now simply search for pytest binary in
the .venv directory too. Devs can then choose if they want to populate
.venv with pip or uv. Finally, if the pytest binary cannot be found,
there's a fallback attempt to use `python -m pytest`.
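To illustrate that lookup order (a sketch only, not the actual
meson/autoconf logic):

import shutil
from pathlib import Path

def find_pytest(srcdir: Path) -> list[str]:
    # 1. Prefer a pytest installed into the source tree's .venv.
    venv_pytest = srcdir / ".venv" / "bin" / "pytest"
    if venv_pytest.exists():
        return [str(venv_pytest)]
    # 2. Otherwise take any pytest found on PATH.
    if shutil.which("pytest"):
        return ["pytest"]
    # 3. Finally, fall back to invoking pytest as a module.
    return ["python3", "-m", "pytest"]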
I'm somewhat sceptical that the .venv support should be introduced together
with the rest of this.
-SUBDIRS = perl postmaster regress isolation modules authentication recovery subscription
+SUBDIRS = \
+ authentication \
+ isolation \
+ modules \
+ perl \
+ postmaster \
+ pytest \
+ recovery \
+ regress \
+ subscription
I'm onboard with that, but we should do it separately and probably check for
other cases where we should do it at the same time.
I'm not sure what context this is referring to? What are you on board with?
If I understood Andres correctly this was about splitting the items
across multiple lines.
Yep.
I moved this to a separate thread, and it was
committed by Michael in 9adf32da6b. So this has been resolved afaik.
Yay.
I think it'd be a seriously bad idea to start with no central infrastructure,
we'd be forced to duplicate that all over.
Right, I just want central infra to be pulled out of the new tests
that need them rather than the other way around.
I'm not sure how you expect that to work in practice. I believe (and I
think Andres too) that there's some infra that we already know we'll
need for many tests, e.g. starting/stopping nodes, running queries,
handling errors.
Yes, I do indeed agree with that.
I don't think it makes sense to have those be pulled
out of new tests. You need some basics, otherwise no-one will want to
write tests. And even if they do, everyone ends up with different styles
of doing basic things. I'd rather coordinate on a bit of style upront so
that tests behave similarly for common usages.
Indeed. I'm fairly fundamentally opposed to merging any of this without first
having developed the basic infrastructure.
Greetings,
Andres Freund
On Wed, 7 Jan 2026 at 00:17, Andres Freund <andres@anarazel.de> wrote:
I'm somewhat sceptical that the .venv support should be introduced together
with the rest of this.
Could you expand a bit on this? My thinking was that people have a
tendency to get confused by python dependency management (because
there's too many options to do it). So having an easy documented and
supported way to do it seemed like a good idea to have people not get
frustrated.
Would you rather have it only be documented how to install the python
dependencies? And not have meson/autoconf automatically detect the
.venv?
To be clear, if it was only pytest then recommending "pipx install
pytest" would probably be easiest, but it seems like we'll at least
want cryptography for the tests Jacob is writing.
And I'm also thinking ahead a bit towards being able to use (a
specific version of) ruff for formatting & linting of python code. See
also [1]/messages/by-id/DFCDD5H4J7VX.3GJKRBBDCKQ86@jeltef.nl
Hi,
On 2026-01-07 00:49:28 +0100, Jelte Fennema-Nio wrote:
On Wed, 7 Jan 2026 at 00:17, Andres Freund <andres@anarazel.de> wrote:
I'm somewhat sceptical that the .venv support should be introduced together
with the rest of this.
Could you expand a bit on this? My thinking was that people have a
tendency to get confused by python dependency management (because
there's too many options to do it). So having an easy documented and
supported way to do it seemed like a good idea to have people not get
frustrated.
Would you rather have it only be documented how to install the python
dependencies? And not have meson/autoconf automatically detect the
.venv?
To be clear, if it was only pytest then recommending "pipx install
pytest" would probably be easiest, but it seems like we'll at least
want cryptography for the tests Jacob is writing.
And I'm also thinking ahead a bit towards being able to use (a
specific version of) ruff for formatting & linting of python code. See
also [1]
I mainly think you're doing too much at once. There may be arguments for
recognizing a .venv in the build or source directory. But it just seems like a
separate project than the basic infrastructure to have python tests. I'd
start with just documenting the set of packages that are required and let the
user deal with the rest. Then, in a subsequent step, we can discuss whether or
not we want the venv support.
I'd also not include support for MTEST_SUITES, nor would I remove an existing
test as part of this thread. I'd not include something like
src/test/pytest/libpq/_generated_errors.py either, that seems orthogonal to
me. Getting something that's not yet complete, but already 288kB, merged, is
no small feat...
Greetings,
Andres Freund
On Tue, Jan 6, 2026 at 11:07 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:
That's fine by me, but I plan to focus on
things that need to get into PG19 before I focus on this, since
nothing is really blocked on it.
Part of the reason why I've been trying to push this forward is that
automated tests for the GoAway patch are definitely blocked on this. (The
other reason is that I'd like to reduce the amount of perl I have to
read/write.)
Ah, okay. To state publicly what I've already mentioned to you
off-list: I have absolutely no intention of going it alone on this. My
bar is enthusiastic buy-in from a number of maintainers (the price
paid for expanding the scope from "made-for-purpose protocol test
suite" to "eventual Test::Cluster replacement"), and no -1s. I expect
that to take a while.
It's perfectly okay if you'd like to tie the GoAway proposal to this,
but that seems like it's unlikely to result in short-term success. It
was in Drafts for a reason. The stated point of v2 was to "spark some
opinions and conversation", and I have no plans to fast-track this
patchset for 19.
I think it'd be a seriously bad idea to start with no central infrastructure,
we'd be forced to duplicate that all over.
Right, I just want central infra to be pulled out of the new tests
that need them rather than the other way around.
I'm not sure how you expect that to work in practice.
Hopefully I'm just describing refactoring? Holding your nose,
open-coding it in a test even though you'd rather not, showcasing the
way you'd like to use it so we can discuss the style, and refactoring
it outwards and upwards in a later patch in the set, once you hit the
rule of three and there's a pattern of usage and an API that people
like. (This may have been what you're doing with v4 onwards, but again
I haven't been able to sit down with it.)
I believe (and I
think Andres too) that there's some infra that we already know we'll
need for many tests, e.g. starting/stopping nodes, running queries,
handling errors. [snip]
Sure -- I feel like I've already agreed with that upthread, ad
nauseam... along with my reasons why v2 hadn't tackled any of that
yet. I'd hoped to talk about the design questions I had in v2, but
it's OSS and no one is required to talk about things just because I
want them to. :D Your v3 rewrote it instead, and I haven't had time to
review v3 yet.
Writing code to start and stop a server and run SQL is a matter of
programming. Writing a test suite that newcomers can intuitively use,
and test interesting new things with, is a long-term collaboration. I
am much more interested in doing the latter, because we already have
the former, and personally I'm happy to build momentum slowly and wait
on a group of people who are in a good place to discuss it.
--Jacob
On Wed Jan 7, 2026 at 1:48 AM CET, Andres Freund wrote:
I mainly think you're doing too much at once.
Attached is a simplified version with all of the things you mentioned
removed.
The load balance test replacement was more meant as a POC of what this
test would look like if it had been written in Python. So I kept it in a
separate commit, but labeled it as POC for now (it's not WIP though
since I'm pretty happy with what it looks like).
Attachments:
v9-0001-Add-support-for-pytest-test-suites.patch
From f7e6a30d6ea639b137f775311f06a13779fff6ed Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v9 1/5] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
This contains a custom pytest plugin to generate TAP output. This plugin
is used by the Meson mtest runner, to show relevant information for
failed tests. The pytest-tap plugin would have been preferable, but it's
now in maintenance mode, and it has problems with accidentally
suppressing important collection failures.
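For illustration, a two-test run with one failure produces a TAP stream
like

    1..2
    ok 1 - pyt/test_libpq.py::test_connstr
    not ok 2 - pyt/test_libpq.py::test_connect

on stdout (test names illustrative), with the detailed failure report
routed to stderr so that Meson can display it.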
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 11 +-
.gitignore | 3 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 24 ++++-
meson.build | 100 ++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 1 +
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 15 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 598 insertions(+), 8 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 77d0362a551..1b0deae8d87 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -44,6 +44,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -315,6 +316,7 @@ task:
-Dlibcurl=enabled
-Dnls=enabled
-Dpam=enabled
+ -DPYTEST=pytest-3.12
setup_additional_packages_script: |
#pkgin -y install ...
@@ -518,14 +520,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -662,6 +665,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -711,6 +716,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -784,6 +790,7 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a550ce6194b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +44,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 045c913865d..1263f84e699 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19174,6 +19207,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 145197e6bd6..0a4498999fe 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2405,6 +2410,21 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, [pytest py.test])
+ if test -z "$PYTEST"; then
+ # Try python -m pytest as a fallback
+ AC_MSG_CHECKING([whether python -m pytest works])
+ if "$PYTHON" -m pytest --version >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$PYTHON -m pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found])
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 2064d1b0a8d..659d36d601d 100644
--- a/meson.build
+++ b/meson.build
@@ -1711,6 +1711,47 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest_version = ''
+pytest_cmd = ['pytest'] # dummy, overwritten when pytest is found
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that
+# value after plugin loading. On lower versions pytest will throw an error even
+# when just running 'pytest --version'. So we need to configure it here too.
+# This won't help people manually running pytest outside of meson/make, but we
+# expect those to use a recent enough version of pytest anyway (and if not they
+# can manually configure PYTHONPATH too).
+pytest_env = {'PYTHONPATH': meson.project_source_root() / 'src' / 'test' / 'pytest'}
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: false)
+
+ if pytest.found()
+ pytest_enabled = true
+ pytest_version = run_command(pytest, '--version', env: pytest_env, check: false).stdout().strip().split(' ')[-1]
+ pytest_cmd = [pytest.full_path()]
+ else
+ # Try python -m pytest as a fallback
+ pytest_check = run_command(python, '-m', 'pytest', '--version', env: pytest_env, check: false)
+ if pytest_check.returncode() == 0
+ pytest_enabled = true
+ pytest_version = pytest_check.stdout().strip().split(' ')[-1]
+ pytest_cmd = [python.full_path(), '-m', 'pytest']
+ endif
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3800,6 +3841,64 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = pytest_cmd
+
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', pytest_env['PYTHONPATH'])
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3973,6 +4072,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'pytest': pytest_enabled ? ' '.join(pytest_cmd) + ' ' + pytest_version : not_found_dep,
},
section: 'Programs',
)
diff --git a/meson_options.txt b/meson_options.txt
index 6a793f3e479..cb4825c3575 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
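+    #
+    # In a test module that pattern looks like, for example:
+    #
+    #     import pytest
+    #     cryptography = pytest.importorskip("cryptography")  # example package
+    #
+    # which skips that module's tests cleanly when the package is missing.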
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 124df2c8582..778b59c9afb 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,8 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
+ 'PYTEST': pytest_enabled ? ' '.join(pytest_cmd) : '',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
diff --git a/src/test/Makefile b/src/test/Makefile
index 3eb0a06abb4..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -18,6 +18,7 @@ SUBDIRS = \
modules \
perl \
postmaster \
+ pytest \
recovery \
regress \
subscription
diff --git a/src/test/meson.build b/src/test/meson.build
index cd45cbf57fb..09175f0eaea 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..b1f6061b307
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (i.e.\033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
base-commit: 31ddbb38eeff60ad5353768c7416fea3a0ecafce
--
2.52.0
  v9-0002-Add-pytest-infrastructure-to-interact-with-Postgr.patch (text/x-patch)
From ebd0dc2218a744c0c94180472dc92a4ab901999a Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v9 2/5] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be so easy to use that the actual
tests contain only the logic for the behaviour under test, as opposed to
a bunch of boilerplate. Examples of this are: Types get converted to
their Python counterparts automatically. Errors become actual Python
exceptions. Results of queries that return only a single row or cell are
unpacked automatically, so you don't have to do rows[0][0] if the query
returns a single cell.
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
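A short sketch of what that buys in a test (fixture names as in the
README below; the exact assertions are illustrative):

    import pytest
    from libpq import LibpqError

    def test_unpacking(conn):
        assert conn.sql("SELECT 1 + 1") == 2        # single cell unpacked
        assert conn.sql("SELECT 1, 2") == (1, 2)    # single row unpacked

    def test_errors_become_exceptions(conn):
        with pytest.raises(LibpqError) as excinfo:
            conn.sql("SELECT 1/0")
        assert excinfo.value.sqlstate == "22012"    # division_by_zero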
---
doc/src/sgml/regress.sgml | 66 ++-
pyproject.toml | 3 +
src/test/pytest/README | 154 ++++++-
src/test/pytest/libpq/__init__.py | 35 ++
src/test/pytest/libpq/_core.py | 488 ++++++++++++++++++++++
src/test/pytest/libpq/errors.py | 62 +++
src/test/pytest/meson.build | 4 +
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 ++++
src/test/pytest/pypg/fixtures.py | 335 +++++++++++++++
src/test/pytest/pypg/server.py | 470 +++++++++++++++++++++
src/test/pytest/pypg/util.py | 42 ++
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 ++
src/test/pytest/pyt/test_libpq.py | 172 ++++++++
src/test/pytest/pyt/test_multi_server.py | 46 ++
src/test/pytest/pyt/test_query_helpers.py | 347 +++++++++++++++
17 files changed, 2339 insertions(+), 2 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..2d85edacec7 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,70 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need
+ <application>pytest</application> installed. You can either install it
+ system-wide, or create a virtual environment in the source directory:
+<programlisting>
+python -m venv .venv
+source .venv/bin/activate
+pip install .
+</programlisting>
+ Alternatively, if you have <application>uv</application> installed:
+<programlisting>
+uv sync
+source .venv/bin/activate
+</programlisting>
+ Remember to activate the virtual environment before running
+ <command>configure</command> or <command>meson setup</command>.
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
+ </para>
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..bb75e56a25d 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,153 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have pytest installed.
+
+If you don't have pytest installed system-wide, you can create a virtual
+environment:
+
+ python3 -m venv .venv
+ source .venv/bin/activate # On Windows: .venv\Scripts\activate
+ pip install . # Installs pytest and other dependencies
+
+Or using uv (https://docs.astral.sh/uv/):
+
+ uv sync
+ source .venv/bin/activate # On Windows: .venv\Scripts\activate
+
+Remember to activate the virtual environment before running configure/meson
+setup.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf based builds, you can run:
+ make check
+or
+ make installcheck
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+  A simple but user-friendly Python wrapper around libpq.
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A connected PGconn instance to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example test:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server is independent
+ assert node1.port != node2.port
+
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+  # After the test finishes, the original configuration is restored and
+ # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..6a71ebbe43f
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..1c059b9b446
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,488 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import platform
+import os
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+ if system == "Windows":
+        # On Windows, libpq.dll is confusingly in bindir, not libdir, and we
+        # need to add this directory to the search path.
+ libpq_path = os.path.join(bindir, name)
+ lib = ctypes.CDLL(libpq_path)
+ else:
+ libpq_path = os.path.join(libdir, name)
+ lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises LibpqError with diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise LibpqError(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
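+# A hedged usage sketch of connstr() (values chosen for illustration):
+#
+#     connstr({"host": "/tmp", "application_name": "my app"})
+#     -> "host=/tmp application_name='my app'"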
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..c665b663e22
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,62 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Exception classes for libpq errors.
+"""
+
+from typing import Optional
+
+
+class LibpqError(RuntimeError):
+ """Exception for libpq errors with PostgreSQL diagnostic fields."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
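+
+    # Example (illustrative): a division_by_zero error carries SQLSTATE
+    # "22012", so its sqlstate_class is "22" (data exception).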
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index b1f6061b307..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,6 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extras(*keys: str):
+ """
+    A convenience annotation which skips tests unless all of the required
+    keys are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+        pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys are not present
+ in PG_TEST_EXTRA. Use this inside fixtures where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import contextlib
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+ This fixture is per-module, which means it's generally only useful for
+ configuring timeouts of operations that happen in the setup phase of
+ other module-scoped fixtures. If you used it in a test, each subsequent
+ test in the module would get a reduced timeout.
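+
+ Example (wait_until_ready is a hypothetical helper):
+
+ @pytest.fixture(scope="module")
+ def slow_setup(pg_server_module, remaining_timeout_module):
+ wait_until_ready(pg_server_module, timeout=remaining_timeout_module())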
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+ # This happens in CI when trying to load a 32-bit libpq library
+ # with a 64-bit Python.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+ Module-scoped server context. This can be useful when certain settings
+ need to be overridden at the module level through autouse fixtures. An
+ example of this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
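+
+ Example:
+
+ def test_with_users(pg):
+ users = pg.create_users("admin") # dropped again after the test
+ assert users["admin"] == "adminuser"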
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
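+
+ For example:
+
+ hba.prepend(
+ ["local", "all", "all", "trust"],
+ "host all all 127.0.0.1/32 trust",
+ )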
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
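+
+ For example:
+
+ conf.set(log_min_messages="debug1", work_mem="64MB")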
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one;
+ currently only allowed if hostaddr is also provided
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # Including the host to use for connections - either the socket
+ # directory or TCP address
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+ # Between closing of the socket, temp_sock, and server start, we're racing
+ # against anything that wants to open up ephemeral ports, so try not to
+ # put any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(self._bindir / "psql", "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
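+
+ Example:
+
+ with server.subcontext():
+ server.create_users("temp") # dropped when the context exits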
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, any number of matches is accepted.
+ If a number, exactly that many matches are required.
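+
+ Example:
+
+ with pg.log_contains("connection received"):
+ pg.connect()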
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
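+ # A single string argument is run through the shell by default; an
+ # argument list is executed directly.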
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+ # We could normally use shlex.join here, but it's not available in
+ # Python 3.6 which we still like to support
+ unsafe_string_cmd = " ".join(map(shlex.quote, command))
+ eprint(f"+ {unsafe_string_cmd}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
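+ """capture runs the given command and returns its stdout, with the
+ trailing newline removed."""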
+ # str.removesuffix() requires Python 3.9, so strip the trailing newline
+ # by hand for the sake of older interpreters.
+ out = run(command, *args, stdout=stdout, encoding=encoding, **kwargs).stdout
+ return out[:-1] if out.endswith("\n") else out
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..771fe8f76e3
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+from libpq import LibpqError
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises LibpqError with correct SQLSTATE."""
+ with pytest.raises(LibpqError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(LibpqError) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+ # Check it's a valid UUID by ensuring it can be converted to string
+ assert len(str(result)) == 36 # UUID string format length
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
--
2.52.0
v9-0003-POC-Convert-load-balance-tests-from-perl-to-pytho.patch
From 13c9c8576b70b4cf80ea6944727baa6f5639ddc3 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v9 3/5] POC: Convert load balance tests from perl to python
This is a proof of concept to show how to use the pytest test
infrastructure. It converts two existing tests that could not share
code, and now they do. If we ever introduce another load balance method
(e.g. round robin), we can easily test it for both DNS and hostlist
based load balancing by adding a single new test function.
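
For example, a hypothetical round_robin method could be covered for both
setups with one function along these lines (a sketch only; round_robin is
not an existing load_balance_hosts value):

    def test_load_balance_hosts_round_robin(load_balance_nodes):
        nodes, connect = load_balance_nodes
        for node in nodes:
            with node.log_contains("connection received"):
                connect(load_balance_hosts="round_robin")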
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index bf4baa92917..4c4bdb4b3a3 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c5ecd9c3a87..56790dd92a9 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes listening on Unix sockets.
+
+ The nodes share a socket directory but each listens on its own port.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+ Windows or Linux are required to run this test because these OSes allow
+ binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
+ don't. We need to bind to different IP addresses, so that we can use these
+ different IP addresses in the hosts file.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+ except (OSError, IOError):
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+ # Statistically, each node should receive at least one connection.
+ # The probability of a given node receiving 0 connections is (2/3)^50 ≈ 1.57e-9
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 1f970ff994b..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index 210ec1ff517..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
v9-0004-WIP-pytest-Add-some-SSL-client-tests.patch (text/x-patch)
From c82e188e3e31277ced29e102532682630f90dfc6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v9 4/5] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
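
For illustration, a test drives the mock roughly like this (condensed from
test_server_with_ssl_disabled() in the diff below; the callback body is
elided):

    def test_example(connect, tcp_server):
        def server_logic(sock):
            ...  # wire-protocol reads/writes; assertion failures here
                 # are re-raised on the main thread at fixture teardown
        tcp_server.background(server_logic)
        connect(**tcp_server.conninfo)  # exercise the client under test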
---
.cirrus.tasks.yml | 2 +
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 424 insertions(+)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1b0deae8d87..17fd7e0c8c3 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -645,6 +645,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -665,6 +666,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index aa062945fb9..287729ad9fb 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index 9e5bdbb6136..6ec274d8165 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+ - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform an SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
v9-0005-WIP-pytest-Add-some-server-side-SSL-tests.patch (text/x-patch)
From 9f9dce9a106351e969b5adc6d0744a62a677470b Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v9 5/5] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0
On Wed Jan 7, 2026 at 2:01 AM CET, Jacob Champion wrote:
> It's perfectly okay if you'd like to tie the GoAway proposal to this,
> but that seems like it's unlikely to result in short-term success.
To be clear, I did not mean to tie the GoAway proposal to this. I meant
to tie committing of *automated tests* for GoAway to this. Given how
little of our libpq interface is tested, I don't think that needs to be
a blocker for the GoAway feature itself.
Timing-wise, I myself would much rather have this patchset be an early
PG20 commit than a last-minute PG19 one.
> Writing code to start and stop a server and run SQL is a matter of
> programming. Writing a test suite that newcomers can intuitively use,
> and test interesting new things with, is a long-term collaboration. I
> am much more interested in doing the latter, because we already have
> the former, and personally I'm happy to build momentum slowly and wait
> on a group of people who are in a good place to discuss it.
Sure, it's a matter of programming. But my feeling is that most people
on the list don't want to build their own test infrastructure. They want
good infrastructure to "just exist", so they can write tests easily with
it.
That's why, after trying to use your initial attempt for a test of mine,
I moved the useful parts to a shared part of the codebase: so that people
can easily try writing a test with it and explain what they like or don't
like, instead of having to create or copy a bunch of boilerplate every
time they want to do something.
In any case, that's where we're at now. It would be nice if you could
take a look at the actual patchset at some point, but no rush.
On Thu Jan 8, 2026 at 1:34 PM CET, Jelte Fennema-Nio wrote:
> Attached is a simplified version with all of the things you mentioned
> removed.
v10 attached to resolve merge conflicts.
Attachments:
v10-0001-Add-support-for-pytest-test-suites.patch (text/x-patch)
From e6acef252bb1b155e1e2e54b28d87a68e5c6cd17 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 13 Aug 2025 10:58:56 -0700
Subject: [PATCH v10 1/5] Add support for pytest test suites
Specify --enable-pytest/-Dpytest=enabled at configure time. This
contains no Postgres test logic -- it is just a "vanilla" pytest
skeleton.
This contains a custom pytest plugin to generate TAP output. This plugin
is used by the Meson mtest runner, to show relevant information for
failed tests. The pytest-tap plugin would have been preferable, but it's
now in maintenance mode, and it has problems with accidentally
suppressing important collection failures.
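
For reference, the TAP stream the plugin emits for mtest looks roughly like
this (test names illustrative):

    1..3
    ok 1 - pyt/test_libpq.py::test_connstr
    ok 2 - pyt/test_libpq.py::test_ssl # skip requires SSL support to be configured
    not ok 3 - pyt/test_libpq.py::test_bad_query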
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
---
.cirrus.tasks.yml | 11 +-
.gitignore | 3 +
configure | 166 +++++++++++++++++++++++++++++-
configure.ac | 24 ++++-
meson.build | 100 ++++++++++++++++++
meson_options.txt | 8 +-
pyproject.toml | 21 ++++
src/Makefile.global.in | 29 ++++++
src/makefiles/meson.build | 2 +
src/test/Makefile | 1 +
src/test/meson.build | 1 +
src/test/pytest/Makefile | 20 ++++
src/test/pytest/README | 1 +
src/test/pytest/meson.build | 15 +++
src/test/pytest/pgtap.py | 198 ++++++++++++++++++++++++++++++++++++
src/tools/testwrap | 6 +-
16 files changed, 598 insertions(+), 8 deletions(-)
create mode 100644 pyproject.toml
create mode 100644 src/test/pytest/Makefile
create mode 100644 src/test/pytest/README
create mode 100644 src/test/pytest/meson.build
create mode 100644 src/test/pytest/pgtap.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2a821593ce5..c9db12d53b9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -44,6 +44,7 @@ env:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
@@ -315,6 +316,7 @@ task:
-Dlibcurl=enabled
-Dnls=enabled
-Dpam=enabled
+ -DPYTEST=pytest-3.12
setup_additional_packages_script: |
#pkgin -y install ...
@@ -518,14 +520,15 @@ task:
set -e
./configure \
--enable-cassert --enable-injection-points --enable-debug \
- --enable-tap-tests --enable-nls \
+ --enable-tap-tests --enable-pytest --enable-nls \
--with-segsize-blocks=6 \
--with-libnuma \
--with-liburing \
\
${LINUX_CONFIGURE_FEATURES} \
\
- CLANG="ccache clang"
+ CLANG="ccache clang" \
+ PYTEST="env LD_PRELOAD=/lib/x86_64-linux-gnu/libasan.so.8 pytest"
EOF
build_script: su postgres -c "make -s -j${BUILD_JOBS} world-bin"
upload_caches: ccache
@@ -663,6 +666,8 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-packaging
+ py312-pytest
tcl
zstd
@@ -712,6 +717,7 @@ task:
sh src/tools/ci/ci_macports_packages.sh $MACOS_PACKAGE_LIST
# system python doesn't provide headers
sudo /opt/local/bin/port select python3 python312
+ sudo /opt/local/bin/port select pytest pytest312
# Make macports install visible for subsequent steps
echo PATH=/opt/local/sbin/:/opt/local/bin/:$PATH >> $CIRRUS_ENV
upload_caches: macports
@@ -785,6 +791,7 @@ task:
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
+ -Dpytest=enabled
-Dplperl=enabled
-Dplpython=enabled
diff --git a/.gitignore b/.gitignore
index 4e911395fe3..a550ce6194b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@ win32ver.rc
*.exe
lib*dll.def
lib*.pc
+__pycache__/
# Local excludes in root directory
/GNUmakefile
@@ -43,3 +44,5 @@ lib*.pc
/Release/
/tmp_install/
/portlock/
+/.venv/
+/uv.lock
diff --git a/configure b/configure
index 045c913865d..1263f84e699 100755
--- a/configure
+++ b/configure
@@ -630,6 +630,8 @@ vpath_build
PG_SYSROOT
PG_VERSION_NUM
LDFLAGS_EX_BE
+UV
+PYTEST
PROVE
DBTOEPUB
FOP
@@ -772,6 +774,7 @@ CFLAGS
CC
enable_injection_points
PG_TEST_EXTRA
+enable_pytest
enable_tap_tests
enable_dtrace
DTRACEFLAGS
@@ -850,6 +853,7 @@ enable_profiling
enable_coverage
enable_dtrace
enable_tap_tests
+enable_pytest
enable_injection_points
with_blocksize
with_segsize
@@ -1550,7 +1554,10 @@ Optional Features:
--enable-profiling build with profiling enabled
--enable-coverage build with coverage testing instrumentation
--enable-dtrace build with DTrace support
- --enable-tap-tests enable TAP tests (requires Perl and IPC::Run)
+ --enable-tap-tests enable (Perl-based) TAP tests (requires Perl and
+ IPC::Run)
+ --enable-pytest enable (Python-based) pytest suites (requires
+ Python)
--enable-injection-points
enable injection points (for testing)
--enable-depend turn on automatic dependency tracking
@@ -3632,7 +3639,7 @@ fi
#
-# TAP tests
+# Test frameworks
#
@@ -3660,6 +3667,32 @@ fi
+
+# Check whether --enable-pytest was given.
+if test "${enable_pytest+set}" = set; then :
+ enableval=$enable_pytest;
+ case $enableval in
+ yes)
+ :
+ ;;
+ no)
+ :
+ ;;
+ *)
+ as_fn_error $? "no argument expected for --enable-pytest option" "$LINENO" 5
+ ;;
+ esac
+
+else
+ enable_pytest=no
+
+fi
+
+
+
+
+
+
#
# Injection points
#
@@ -19174,6 +19207,135 @@ $as_echo "$modulestderr" >&6; }
fi
fi
+if test "$enable_pytest" = yes; then
+ if test -z "$PYTEST"; then
+ for ac_prog in pytest py.test
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_PYTEST+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $PYTEST in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_PYTEST="$PYTEST" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_PYTEST="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+PYTEST=$ac_cv_path_PYTEST
+if test -n "$PYTEST"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$PYTEST" && break
+done
+
+else
+ # Report the value of PYTEST in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for PYTEST" >&5
+$as_echo_n "checking for PYTEST... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTEST" >&5
+$as_echo "$PYTEST" >&6; }
+fi
+
+ if test -z "$PYTEST"; then
+ # If pytest not found, try installing with uv
+ if test -z "$UV"; then
+ for ac_prog in uv
+do
+ # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_UV+:} false; then :
+ $as_echo_n "(cached) " >&6
+else
+ case $UV in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_UV="$UV" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_UV="$as_dir/$ac_word$ac_exec_ext"
+ $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+ done
+IFS=$as_save_IFS
+
+ ;;
+esac
+fi
+UV=$ac_cv_path_UV
+if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+ test -n "$UV" && break
+done
+
+else
+ # Report the value of UV in configure's output in all cases.
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for UV" >&5
+$as_echo_n "checking for UV... " >&6; }
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $UV" >&5
+$as_echo "$UV" >&6; }
+fi
+
+ if test -n "$UV"; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether uv can install pytest dependencies" >&5
+$as_echo_n "checking whether uv can install pytest dependencies... " >&6; }
+ if "$UV" pip install "$srcdir" >&5 2>&1; then
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+ PYTEST="$UV run pytest"
+ else
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+ as_fn_error $? "pytest not found and uv failed to install dependencies" "$LINENO" 5
+ fi
+ else
+ as_fn_error $? "pytest not found" "$LINENO" 5
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/configure.ac b/configure.ac
index 145197e6bd6..0a4498999fe 100644
--- a/configure.ac
+++ b/configure.ac
@@ -225,11 +225,16 @@ AC_SUBST(DTRACEFLAGS)])
AC_SUBST(enable_dtrace)
#
-# TAP tests
+# Test frameworks
#
PGAC_ARG_BOOL(enable, tap-tests, no,
- [enable TAP tests (requires Perl and IPC::Run)])
+ [enable (Perl-based) TAP tests (requires Perl and IPC::Run)])
AC_SUBST(enable_tap_tests)
+
+PGAC_ARG_BOOL(enable, pytest, no,
+ [enable (Python-based) pytest suites (requires Python)])
+AC_SUBST(enable_pytest)
+
AC_ARG_VAR(PG_TEST_EXTRA,
[enable selected extra tests (overridden at runtime by PG_TEST_EXTRA environment variable)])
@@ -2405,6 +2410,21 @@ if test "$enable_tap_tests" = yes; then
fi
fi
+if test "$enable_pytest" = yes; then
+ PGAC_PATH_PROGS(PYTEST, [pytest py.test])
+ if test -z "$PYTEST"; then
+ # Try python -m pytest as a fallback
+ AC_MSG_CHECKING([whether python -m pytest works])
+ if "$PYTHON" -m pytest --version >&AS_MESSAGE_LOG_FD 2>&1; then
+ AC_MSG_RESULT([yes])
+ PYTEST="$PYTHON -m pytest"
+ else
+ AC_MSG_RESULT([no])
+ AC_MSG_ERROR([pytest not found])
+ fi
+ fi
+fi
+
# If compiler will take -Wl,--as-needed (or various platform-specific
# spellings thereof) then add that to LDFLAGS. This is much easier than
# trying to filter LIBS to the minimum for each executable.
diff --git a/meson.build b/meson.build
index 555c94796c6..a8ac9b203f0 100644
--- a/meson.build
+++ b/meson.build
@@ -1718,6 +1718,47 @@ endif
+###############################################################
+# Library: pytest
+###############################################################
+
+pytest_enabled = false
+pytest_version = ''
+pytest_cmd = ['pytest'] # dummy, overwritten when pytest is found
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that
+# value after plugin loading. On lower versions pytest will throw an error even
+# when just running 'pytest --version'. So we need to configure it here too.
+# This won't help people manually running pytest outside of meson/make, but we
+# expect those to use a recent enough version of pytest anyway (and if not they
+# can manually configure PYTHONPATH too).
+pytest_env = {'PYTHONPATH': meson.project_source_root() / 'src' / 'test' / 'pytest'}
+
+pytestopt = get_option('pytest')
+if not pytestopt.disabled()
+ pytest = find_program(get_option('PYTEST'), native: true, required: false)
+
+ if pytest.found()
+ pytest_enabled = true
+ pytest_version = run_command(pytest, '--version', env: pytest_env, check: false).stdout().strip().split(' ')[-1]
+ pytest_cmd = [pytest.full_path()]
+ else
+ # Try python -m pytest as a fallback
+ pytest_check = run_command(python, '-m', 'pytest', '--version', env: pytest_env, check: false)
+ if pytest_check.returncode() == 0
+ pytest_enabled = true
+ pytest_version = pytest_check.stdout().strip().split(' ')[-1]
+ pytest_cmd = [python.full_path(), '-m', 'pytest']
+ endif
+ endif
+
+ if not pytest_enabled and pytestopt.enabled()
+ error('pytest not found')
+ endif
+endif
+
+
+
###############################################################
# Library: zstd
###############################################################
@@ -3807,6 +3848,64 @@ foreach test_dir : tests
)
endforeach
install_suites += test_group
+ elif kind == 'pytest'
+ testwrap_pytest = testwrap_base
+ if not pytest_enabled
+ testwrap_pytest += ['--skip', 'pytest not enabled']
+ endif
+
+ test_command = pytest_cmd
+
+ test_command += [
+ '-c', meson.project_source_root() / 'pyproject.toml',
+ '--verbose',
+ '-p', 'pgtap', # enable our test reporter plugin
+ '-ra', # show skipped and xfailed tests too
+ ]
+
+ # Add temporary install, the build directory for non-installed binaries and
+ # also test/ for non-installed test binaries built separately.
+ env = test_env
+ env.prepend('PATH', temp_install_bindir, test_dir['bd'], test_dir['bd'] / 'test')
+ temp_install_datadir = '@0@@1@'.format(test_install_destdir, dir_prefix / dir_data)
+ env.set('share_contrib_dir', temp_install_datadir / 'contrib')
+ env.prepend('PYTHONPATH', pytest_env['PYTHONPATH'])
+
+ foreach name, value : t.get('env', {})
+ env.set(name, value)
+ endforeach
+
+ test_group = test_dir['name']
+ test_kwargs = {
+ 'protocol': 'tap',
+ 'suite': test_group,
+ 'timeout': 1000,
+ 'depends': test_deps + t.get('deps', []),
+ 'env': env,
+ } + t.get('test_kwargs', {})
+
+ foreach onetest : t['tests']
+ # Make test names prettier, remove pyt/ and .py
+ onetest_p = onetest
+ if onetest_p.startswith('pyt/')
+ onetest_p = onetest.split('pyt/')[1]
+ endif
+ if onetest_p.endswith('.py')
+ onetest_p = fs.stem(onetest_p)
+ endif
+
+ test(test_dir['name'] / onetest_p,
+ python,
+ kwargs: test_kwargs,
+ args: testwrap_pytest + [
+ '--testgroup', test_dir['name'],
+ '--testname', onetest_p,
+ '--', test_command,
+ test_dir['sd'] / onetest,
+ ],
+ )
+ endforeach
+ install_suites += test_group
else
error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
endif
@@ -3980,6 +4079,7 @@ summary(
'bison': '@0@ @1@'.format(bison.full_path(), bison_version),
'dtrace': dtrace,
'flex': '@0@ @1@'.format(flex.full_path(), flex_version),
+ 'pytest': pytest_enabled ? ' '.join(pytest_cmd) + ' ' + pytest_version : not_found_dep,
},
section: 'Programs',
)
diff --git a/meson_options.txt b/meson_options.txt
index 6a793f3e479..cb4825c3575 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,10 @@ option('cassert', type: 'boolean', value: false,
description: 'Enable assertion checks (for debugging)')
option('tap_tests', type: 'feature', value: 'auto',
- description: 'Enable TAP tests')
+ description: 'Enable (Perl-based) TAP tests')
+
+option('pytest', type: 'feature', value: 'auto',
+ description: 'Enable (Python-based) pytest suites')
option('injection_points', type: 'boolean', value: false,
description: 'Enable injection points')
@@ -195,6 +198,9 @@ option('PERL', type: 'string', value: 'perl',
option('PROVE', type: 'string', value: 'prove',
description: 'Path to prove binary')
+option('PYTEST', type: 'array', value: ['pytest', 'py.test'],
+ description: 'Path to pytest binary')
+
option('PYTHON', type: 'array', value: ['python3', 'python'],
description: 'Path to python binary')
diff --git a/pyproject.toml b/pyproject.toml
new file mode 100644
index 00000000000..60abb4d0655
--- /dev/null
+++ b/pyproject.toml
@@ -0,0 +1,21 @@
+[project]
+name = "postgresql-hackers-tooling"
+version = "0.1.0"
+description = "Pytest infrastructure for PostgreSQL"
+requires-python = ">=3.6"
+dependencies = [
+ # pytest 7.0 was the last version which supported Python 3.6, but the BSDs
+ # have started putting 8.x into ports, so we support both. (pytest 8 can be
+ # used throughout once we drop support for Python 3.7.)
+ "pytest >= 7.0, < 10",
+
+ # Any other dependencies are effectively optional (added below). We import
+ # these libraries using pytest.importorskip(). So tests will be skipped if
+ # they are not available.
+]
+
+[tool.pytest.ini_options]
+minversion = "7.0"
+
+# Common test code can be found here.
+pythonpath = ["src/test/pytest"]
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 371cd7eba2c..160cdffd4f1 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -211,6 +211,7 @@ enable_dtrace = @enable_dtrace@
enable_coverage = @enable_coverage@
enable_injection_points = @enable_injection_points@
enable_tap_tests = @enable_tap_tests@
+enable_pytest = @enable_pytest@
python_includespec = @python_includespec@
python_libdir = @python_libdir@
@@ -354,6 +355,7 @@ MSGFMT = @MSGFMT@
MSGFMT_FLAGS = @MSGFMT_FLAGS@
MSGMERGE = @MSGMERGE@
OPENSSL = @OPENSSL@
+PYTEST = @PYTEST@
PYTHON = @PYTHON@
TAR = @TAR@
XGETTEXT = @XGETTEXT@
@@ -508,6 +510,33 @@ prove_installcheck = @echo "TAP tests not enabled. Try configuring with --enable
prove_check = $(prove_installcheck)
endif
+ifeq ($(enable_pytest),yes)
+
+pytest_installcheck = @echo "Installcheck is not currently supported for pytest."
+
+# We also configure the same PYTHONPATH in the pytest settings in
+# pyproject.toml, but pytest versions below 8.4 only actually use that value
+# after plugin loading. So we need to configure it here too. This won't help
+# people manually running pytest outside of meson/make, but we expect those to
+# use a recent enough version of pytest anyway (and if not they can manually
+# configure PYTHONPATH too).
+define pytest_check
+echo "# +++ pytest check in $(subdir) +++" && \
+rm -rf '$(CURDIR)'/tmp_check && \
+$(MKDIR_P) '$(CURDIR)'/tmp_check && \
+cd $(srcdir) && \
+ TESTLOGDIR='$(CURDIR)/tmp_check/log' \
+ TESTDATADIR='$(CURDIR)/tmp_check' \
+ PYTHONPATH='$(abs_top_srcdir)/src/test/pytest:$$PYTHONPATH' \
+ $(with_temp_install) \
+ $(PYTEST) -c '$(abs_top_srcdir)/pyproject.toml' --verbose -ra ./pyt/
+endef
+
+else
+pytest_installcheck = @echo "pytest is not enabled. Try configuring with --enable-pytest"
+pytest_check = $(pytest_installcheck)
+endif
+
# Installation.
install_bin = @install_bin@
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index aa2f9a87b14..00a24f0b36b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -56,6 +56,8 @@ pgxs_kv = {
'enable_nls': libintl.found() ? 'yes' : 'no',
'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
'enable_tap_tests': tap_tests_enabled ? 'yes' : 'no',
+ 'enable_pytest': pytest_enabled ? 'yes' : 'no',
+ 'PYTEST': pytest_enabled ? ' '.join(pytest_cmd) : '',
'enable_debug': get_option('debug') ? 'yes' : 'no',
'enable_coverage': 'no',
'enable_dtrace': dtrace.found() ? 'yes' : 'no',
diff --git a/src/test/Makefile b/src/test/Makefile
index 3eb0a06abb4..0be9771d71f 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -18,6 +18,7 @@ SUBDIRS = \
modules \
perl \
postmaster \
+ pytest \
recovery \
regress \
subscription
diff --git a/src/test/meson.build b/src/test/meson.build
index cd45cbf57fb..09175f0eaea 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -5,6 +5,7 @@ subdir('isolation')
subdir('authentication')
subdir('postmaster')
+subdir('pytest')
subdir('recovery')
subdir('subscription')
subdir('modules')
diff --git a/src/test/pytest/Makefile b/src/test/pytest/Makefile
new file mode 100644
index 00000000000..2bdca96ccbe
--- /dev/null
+++ b/src/test/pytest/Makefile
@@ -0,0 +1,20 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for pytest
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/pytest/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/test/pytest
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+check:
+ $(pytest_check)
+
+clean distclean maintainer-clean:
+ rm -rf tmp_check
diff --git a/src/test/pytest/README b/src/test/pytest/README
new file mode 100644
index 00000000000..1333ed77b7e
--- /dev/null
+++ b/src/test/pytest/README
@@ -0,0 +1 @@
+TODO
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
new file mode 100644
index 00000000000..b1f6061b307
--- /dev/null
+++ b/src/test/pytest/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+if not pytest_enabled
+ subdir_done()
+endif
+
+tests += {
+ 'name': 'pytest',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'pytest': {
+ 'tests': [
+ ],
+ },
+}
diff --git a/src/test/pytest/pgtap.py b/src/test/pytest/pgtap.py
new file mode 100644
index 00000000000..c92cad98d95
--- /dev/null
+++ b/src/test/pytest/pgtap.py
@@ -0,0 +1,198 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import os
+import sys
+
+import pytest
+
+#
+# Helpers
+#
+
+
+class TAP:
+ """
+ A basic API for reporting via the TAP protocol.
+ """
+
+ def __init__(self):
+ self.count = 0
+
+ # XXX interacts poorly with testwrap's boilerplate diagnostics
+ # self.print("TAP version 13")
+
+ def expect(self, num: int):
+ self.print(f"1..{num}")
+
+ def print(self, *args):
+ print(*args, file=sys.__stdout__)
+
+ def ok(self, name: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name)
+
+ def skip(self, name: str, reason: str):
+ self.count += 1
+ self.print("ok", self.count, "-", name, "# skip", reason)
+
+ def fail(self, name: str, details: str):
+ self.count += 1
+ self.print("not ok", self.count, "-", name)
+
+ # mtest has some odd behavior around TAP tests where it won't print
+ # diagnostics on failure if they're part of the stdout stream, so we
+ # might as well just dump the details directly to stderr instead.
+ print(details, file=sys.__stderr__)
+
+
+tap = TAP()
+
+
+class TestNotes:
+ """
+ Annotations for a single test. The existing pytest hooks keep interesting
+ information somewhat separated across the different stages
+ (setup/test/teardown), so this class is used to correlate them.
+ """
+
+ skipped = False
+ skip_reason = None
+
+ failed = False
+ details = ""
+
+
+# Register a custom key in the stash dictionary for keeping our TestNotes.
+notes_key = pytest.StashKey[TestNotes]()
+
+
+#
+# Hook Implementations
+#
+
+
+@pytest.hookimpl(tryfirst=True)
+def pytest_configure(config):
+ """
+ Hijacks the standard streams as soon as possible during pytest startup. The
+ pytest-formatted output gets logged to file instead, and we'll use the
+ original sys.__stdout__/__stderr__ streams for the TAP protocol.
+ """
+ logdir = os.getenv("TESTLOGDIR")
+ if not logdir:
+ raise RuntimeError("pgtap requires the TESTLOGDIR envvar to be set")
+
+ os.makedirs(logdir)
+ logpath = os.path.join(logdir, "pytest.log")
+ sys.stdout = sys.stderr = open(logpath, "a", buffering=1)
+
+
+@pytest.hookimpl(trylast=True)
+def pytest_sessionfinish(session, exitstatus):
+ """
+ Suppresses nonzero exit codes due to failed tests. (In that case, we want
+ Meson to report a failure count, not a generic ERROR.)
+ """
+ if exitstatus == pytest.ExitCode.TESTS_FAILED:
+ session.exitstatus = pytest.ExitCode.OK
+
+
+@pytest.hookimpl
+def pytest_collectreport(report):
+ # Include collection failures directly in Meson error output.
+ if report.failed:
+ print(report.longreprtext, file=sys.__stderr__)
+
+
+@pytest.hookimpl
+def pytest_internalerror(excrepr, excinfo):
+ # Include internal errors directly in Meson error output.
+ print(excrepr, file=sys.__stderr__)
+
+
+#
+# Hook Wrappers
+#
+# In pytest parlance, a "wrapper" for a hook can inspect and optionally modify
+# existing hooks' behavior, but it does not replace the hook chain. This is done
+# through a generator-style API which chains the hooks together (see the use of
+# `yield`).
+#
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_collection(session):
+ """Reports the number of gathered tests after collection is finished."""
+ res = yield
+ tap.expect(session.testscollected)
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Annotates a test item with our TestNotes and grabs relevant information for
+ reporting.
+
+ This is called multiple times per test, so it's not correct to print the TAP
+ result here. (A test and its teardown stage can both fail, and we want to
+ see the details for both.) We instead combine all the information for use by
+ our pytest_runtest_protocol wrapper later on.
+ """
+ res = yield
+
+ if notes_key not in item.stash:
+ item.stash[notes_key] = TestNotes()
+ notes = item.stash[notes_key]
+
+ report = res.get_result()
+ if report.passed:
+ pass # no annotation needed
+
+ elif report.skipped:
+ notes.skipped = True
+ _, _, notes.skip_reason = report.longrepr
+
+ elif report.failed:
+ notes.failed = True
+
+ if not notes.details:
+ notes.details += "{:_^72}\n\n".format(f" {report.head_line} ")
+
+ if report.when in ("setup", "teardown"):
+ notes.details += "\n{:_^72}\n\n".format(
+ f" Error during {report.when} of {report.head_line} "
+ )
+
+ notes.details += report.longreprtext + "\n"
+
+ # Include captured stdout/stderr/log in failure output
+ for section_name, section_content in report.sections:
+ if section_content.strip():
+ notes.details += "\n{:-^72}\n".format(f" {section_name} ")
+ notes.details += section_content + "\n"
+
+ else:
+ raise RuntimeError("pytest_runtest_makereport received unknown test status")
+
+ return res
+
+
+@pytest.hookimpl(hookwrapper=True)
+def pytest_runtest_protocol(item, nextitem):
+ """
+ Reports the TAP result for this test item using our gathered TestNotes.
+ """
+ res = yield
+
+ assert notes_key in item.stash, "pgtap didn't annotate a test item?"
+ notes = item.stash[notes_key]
+
+ if notes.failed:
+ tap.fail(item.nodeid, notes.details)
+ elif notes.skipped:
+ tap.skip(item.nodeid, notes.skip_reason)
+ else:
+ tap.ok(item.nodeid)
+
+ return res
diff --git a/src/tools/testwrap b/src/tools/testwrap
index e91296ecd15..346f86b8ea3 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -42,7 +42,11 @@ open(os.path.join(testdir, 'test.start'), 'x')
env_dict = {**os.environ,
'TESTDATADIR': os.path.join(testdir, 'data'),
- 'TESTLOGDIR': os.path.join(testdir, 'log')}
+ 'TESTLOGDIR': os.path.join(testdir, 'log'),
+ # Prevent emitting terminal capability sequences that pollute the
+ # TAP output stream (e.g. \033[?1034h). This happens on OpenBSD with
+ # pytest for unknown reasons.
+ 'TERM': ''}
# The configuration time value of PG_TEST_EXTRA is supplied via argument
base-commit: e5a5e0a90750d665cab417322b9f85c806430d85
--
2.52.0
v10-0002-Add-pytest-infrastructure-to-interact-with-Postg.patch (text/x-patch)
From 4354347134c121751348e06c289807d05e6c4b2d Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Tue, 16 Dec 2025 09:25:48 +0100
Subject: [PATCH v10 2/5] Add pytest infrastructure to interact with PostgreSQL
servers
This adds functionality to the pytest infrastructure that allows tests
to do common things with PostgreSQL servers like:
- creating
- starting
- stopping
- connecting
- running queries
- handling errors
The goal of this infrastructure is to be so easy to use that the actual
tests contain only the logic for the behaviour under test, rather than a
bunch of boilerplate. For example: types get converted to their Python
counterparts automatically, errors become actual Python exceptions, and
results of queries that return only a single row or cell are unpacked
automatically, so you don't have to write rows[0][0] for a single-cell
result.
The only new tests that are part of this commit are tests that cover
this testing infrastructure itself. It's debatable whether such tests
are useful long term, because any infrastructure that's unused by actual
tests should probably not exist. For now it seems good to test this
basic functionality though, both to make sure we don't break it before
committing actual tests that use it, and also as an example for people
writing new tests.
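
Concretely, the intended ergonomics look roughly like this (the conn
fixture and conn.sql() are defined in this patch; LibpqError and the exact
query/exception pairing are illustrative, borrowed from elsewhere in the
series):

    import pytest

    from libpq import LibpqError

    def test_example(conn):
        # A single-cell result is unpacked to a plain Python value.
        assert conn.sql("SELECT 1 + 1") == 2

        # Server errors surface as Python exceptions.
        with pytest.raises(LibpqError):
            conn.sql("SELECT 1/0")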
---
doc/src/sgml/regress.sgml | 66 ++-
pyproject.toml | 3 +
src/test/pytest/README | 154 ++++++-
src/test/pytest/libpq/__init__.py | 35 ++
src/test/pytest/libpq/_core.py | 488 ++++++++++++++++++++++
src/test/pytest/libpq/errors.py | 62 +++
src/test/pytest/meson.build | 4 +
src/test/pytest/pypg/__init__.py | 10 +
src/test/pytest/pypg/_env.py | 72 ++++
src/test/pytest/pypg/fixtures.py | 335 +++++++++++++++
src/test/pytest/pypg/server.py | 470 +++++++++++++++++++++
src/test/pytest/pypg/util.py | 42 ++
src/test/pytest/pyt/conftest.py | 1 +
src/test/pytest/pyt/test_errors.py | 34 ++
src/test/pytest/pyt/test_libpq.py | 172 ++++++++
src/test/pytest/pyt/test_multi_server.py | 46 ++
src/test/pytest/pyt/test_query_helpers.py | 347 +++++++++++++++
17 files changed, 2339 insertions(+), 2 deletions(-)
create mode 100644 src/test/pytest/libpq/__init__.py
create mode 100644 src/test/pytest/libpq/_core.py
create mode 100644 src/test/pytest/libpq/errors.py
create mode 100644 src/test/pytest/pypg/__init__.py
create mode 100644 src/test/pytest/pypg/_env.py
create mode 100644 src/test/pytest/pypg/fixtures.py
create mode 100644 src/test/pytest/pypg/server.py
create mode 100644 src/test/pytest/pypg/util.py
create mode 100644 src/test/pytest/pyt/conftest.py
create mode 100644 src/test/pytest/pyt/test_errors.py
create mode 100644 src/test/pytest/pyt/test_libpq.py
create mode 100644 src/test/pytest/pyt/test_multi_server.py
create mode 100644 src/test/pytest/pyt/test_query_helpers.py
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index d80dd46c5fd..2d85edacec7 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -840,7 +840,7 @@ float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</sect1>
<sect1 id="regress-tap">
- <title>TAP Tests</title>
+ <title>Perl TAP Tests</title>
<para>
Various tests, particularly the client program tests
@@ -929,6 +929,70 @@ PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</sect1>
+ <sect1 id="regress-pytest">
+ <title>Pytest Tests</title>
+
+ <para>
+ Tests in <filename>pyt</filename> directories use the Python
+ <application>pytest</application> framework. These tests provide a
+ convenient way to test libpq client functionality and scenarios requiring
+ multiple PostgreSQL server instances.
+ </para>
+
+ <para>
+ The pytest tests require <productname>PostgreSQL</productname> to be
+ configured with the option <option>--enable-pytest</option> (or
+ <option>-Dpytest=enabled</option> for Meson builds). You also need
+ <application>pytest</application> installed. You can either install it
+ system-wide, or create a virtual environment in the source directory:
+<programlisting>
+python -m venv .venv
+source .venv/bin/activate
+pip install .
+</programlisting>
+ Alternatively, if you have <application>uv</application> installed:
+<programlisting>
+uv sync
+source .venv/bin/activate
+</programlisting>
+ Remember to activate the virtual environment before running
+ <command>configure</command> or <command>meson setup</command>.
+ </para>
+
+ <para>
+ With Meson builds, you can run the pytest tests using:
+<programlisting>
+meson test --suite pytest
+</programlisting>
+ With autoconf-based builds, you can run them from the
+ <filename>src/test/pytest</filename> directory using:
+<programlisting>
+make check
+</programlisting>
+ </para>
+
+ <para>
+ You can also run specific test files directly using pytest:
+<programlisting>
+pytest src/test/pytest/pyt/test_libpq.py
+pytest -k "test_connstr"
+</programlisting>
+ </para>
+
+ <para>
+ Many operations in the test suites use a 180-second timeout, which on slow
+ hosts may lead to load-induced timeouts. Setting the environment variable
+ <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
+ the default to avoid this.
+ </para>
+
+ <para>
+ For more information on writing pytest tests, see the
+ <filename>src/test/pytest/README</filename> file.
+ </para>
+
+ </sect1>
+
<sect1 id="regress-coverage">
<title>Test Coverage Examination</title>
diff --git a/pyproject.toml b/pyproject.toml
index 60abb4d0655..4628d2274e0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -19,3 +19,6 @@ minversion = "7.0"
# Common test code can be found here.
pythonpath = ["src/test/pytest"]
+
+# Load the shared fixtures plugin
+addopts = ["-p", "pypg.fixtures"]
diff --git a/src/test/pytest/README b/src/test/pytest/README
index 1333ed77b7e..bb75e56a25d 100644
--- a/src/test/pytest/README
+++ b/src/test/pytest/README
@@ -1 +1,153 @@
-TODO
+src/test/pytest/README
+
+Pytest-based tests
+==================
+
+This directory contains infrastructure for Python-based tests using pytest,
+along with some core tests for the pytest infrastructure itself. The framework
+provides fixtures for managing PostgreSQL server instances and connecting to
+them via libpq.
+
+
+Running the tests
+=================
+
+NOTE: You must have given the --enable-pytest argument to configure (or
+-Dpytest=enabled for Meson builds). You also need to have pytest installed.
+
+If you don't have pytest installed system-wide, you can create a virtual
+environment:
+
+ python3 -m venv .venv
+ source .venv/bin/activate # On Windows: .venv\Scripts\activate
+ pip install . # Installs pytest and other dependencies
+
+Or using uv (https://docs.astral.sh/uv/):
+
+ uv sync
+ source .venv/bin/activate # On Windows: .venv\Scripts\activate
+
+Remember to activate the virtual environment before running configure/meson
+setup.
+
+With Meson builds, you can run:
+ meson test --suite pytest
+
+With autoconf-based builds, you can run:
+ make check
+or
+ make installcheck
+
+You can run specific test files and/or use pytest's -k option to select tests:
+ pytest src/test/pytest/pyt/test_libpq.py
+ pytest -k "test_connstr"
+
+
+Directory structure
+===================
+
+pypg/
+ Python library providing common functions and pytest fixtures that can be
+ used in tests.
+
+libpq/
+    A simple but user-friendly Python wrapper around libpq
+
+pyt/
+ Tests for the pytest infrastructure itself
+
+pgtap.py
+ A pytest plugin to output results in TAP format
+
+
+Writing tests
+=============
+
+Tests use pytest fixtures to manage server instances and connections. The
+most commonly used fixtures are:
+
+pg
+ A PostgresServer instance configured for the current test. Use this for
+ creating test users/databases or modifying server configuration. Changes
+ are automatically rolled back after the test.
+
+conn
+ A connected PGconn instance to the test server. Automatically cleaned up
+ after the test.
+
+connect
+ A function to create additional connections with custom options.
+
+create_pg
+ A factory function to create additional PostgreSQL servers within a test.
+ Servers are automatically cleaned up at the end of the test. Useful for
+ testing scenarios that require multiple independent servers.
+
+create_pg_module
+ Like create_pg, but servers persist for the entire test module. Use this
+ when multiple tests in a module can share the same servers, which is
+ faster than creating new servers for each test.
+
+
+Example test:
+
+ def test_simple_query(conn):
+ result = conn.sql("SELECT 1 + 1")
+ assert result == 2
+
+ def test_with_user(pg):
+ users = pg.create_users("test")
+ with pg.reloading() as s:
+ s.hba.prepend(["local", "all", users["test"], "trust"])
+
+ conn = pg.connect(user=users["test"])
+ assert conn.sql("SELECT current_user") == users["test"]
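+
+    # The connect fixture can also be called directly with explicit options;
+    # a sketch (the options shown here are illustrative):
+    def test_direct_connect(pg, connect):
+        conn = connect(host=pg.host, port=pg.port, dbname="postgres")
+        assert conn.sql("SELECT 1") == 1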
+
+ def test_multiple_servers(create_pg):
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+        conn1 = node1.connect()
+        conn2 = node2.connect()
+
+        # Each server is independent, with its own port and data directory
+        assert node1.port != node2.port
+        assert conn1.sql("SHOW data_directory") != conn2.sql("SHOW data_directory")
+
+
+Server configuration
+====================
+
+Tests can temporarily modify server configuration using context managers:
+
+ with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ # Server is reloaded here
+    # After the test finishes, the original configuration is restored and
+    # the server is reloaded again
+
+Use pg.restarting() instead if the configuration change requires a restart.
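+
+For example (a sketch; wal_level is just an illustrative setting):
+
+    with pg.restarting() as s:
+        s.conf.set(wal_level="logical")
+    # The server restarts with the new setting; during teardown the original
+    # configuration is restored and the server restarts again.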
+
+
+Timeouts
+========
+
+Tests inherit the PG_TEST_TIMEOUT_DEFAULT environment variable (defaulting
+to 180 seconds). The remaining_timeout fixture provides a function that
+returns how much time remains for the current test.
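+
+For example, a test can scale its own waits to the remaining budget:
+
+    def test_with_deadline(remaining_timeout):
+        assert remaining_timeout() > 0  # fractional seconds left for this test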
+
+
+Environment variables
+=====================
+
+PG_TEST_TIMEOUT_DEFAULT
+ Per-test timeout in seconds (default: 180)
+
+PG_CONFIG
+ Path to pg_config (default: uses PATH)
+
+TESTDATADIR
+ Directory for test data (default: pytest temp directory)
+
+PG_TEST_EXTRA
+ Space-separated list of optional test categories to run (e.g., "ssl")
diff --git a/src/test/pytest/libpq/__init__.py b/src/test/pytest/libpq/__init__.py
new file mode 100644
index 00000000000..6a71ebbe43f
--- /dev/null
+++ b/src/test/pytest/libpq/__init__.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+libpq testing utilities - ctypes bindings and helpers for PostgreSQL's libpq library.
+
+This module provides Python wrappers around libpq for use in pytest tests.
+"""
+
+from . import errors
+from .errors import LibpqError
+from ._core import (
+ ConnectionStatus,
+ DiagField,
+ ExecStatus,
+ PGconn,
+ PGresult,
+ connect,
+ connstr,
+ load_libpq_handle,
+ register_type_info,
+)
+
+__all__ = [
+ "errors",
+ "LibpqError",
+ "ConnectionStatus",
+ "DiagField",
+ "ExecStatus",
+ "PGconn",
+ "PGresult",
+ "connect",
+ "connstr",
+ "load_libpq_handle",
+ "register_type_info",
+]
diff --git a/src/test/pytest/libpq/_core.py b/src/test/pytest/libpq/_core.py
new file mode 100644
index 00000000000..1c059b9b446
--- /dev/null
+++ b/src/test/pytest/libpq/_core.py
@@ -0,0 +1,488 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Core libpq functionality - ctypes bindings and connection handling.
+"""
+
+import contextlib
+import ctypes
+import datetime
+import decimal
+import enum
+import json
+import os
+import platform
+import uuid
+from typing import Any, Callable, Dict, Optional
+
+from .errors import LibpqError
+
+
+# PG_DIAG field identifiers from postgres_ext.h
+class DiagField(enum.IntEnum):
+ SEVERITY = ord("S")
+ SEVERITY_NONLOCALIZED = ord("V")
+ SQLSTATE = ord("C")
+ MESSAGE_PRIMARY = ord("M")
+ MESSAGE_DETAIL = ord("D")
+ MESSAGE_HINT = ord("H")
+ STATEMENT_POSITION = ord("P")
+ INTERNAL_POSITION = ord("p")
+ INTERNAL_QUERY = ord("q")
+ CONTEXT = ord("W")
+ SCHEMA_NAME = ord("s")
+ TABLE_NAME = ord("t")
+ COLUMN_NAME = ord("c")
+ DATATYPE_NAME = ord("d")
+ CONSTRAINT_NAME = ord("n")
+ SOURCE_FILE = ord("F")
+ SOURCE_LINE = ord("L")
+ SOURCE_FUNCTION = ord("R")
+
+
+class ConnectionStatus(enum.IntEnum):
+ """PostgreSQL connection status codes from libpq."""
+
+ CONNECTION_OK = 0
+ CONNECTION_BAD = 1
+
+
+class ExecStatus(enum.IntEnum):
+ """PostgreSQL result status codes from PQresultStatus."""
+
+ PGRES_EMPTY_QUERY = 0
+ PGRES_COMMAND_OK = 1
+ PGRES_TUPLES_OK = 2
+ PGRES_COPY_OUT = 3
+ PGRES_COPY_IN = 4
+ PGRES_BAD_RESPONSE = 5
+ PGRES_NONFATAL_ERROR = 6
+ PGRES_FATAL_ERROR = 7
+ PGRES_COPY_BOTH = 8
+ PGRES_SINGLE_TUPLE = 9
+ PGRES_PIPELINE_SYNC = 10
+ PGRES_PIPELINE_ABORTED = 11
+
+
+class _PGconn(ctypes.Structure):
+ pass
+
+
+class _PGresult(ctypes.Structure):
+ pass
+
+
+_PGconn_p = ctypes.POINTER(_PGconn)
+_PGresult_p = ctypes.POINTER(_PGresult)
+
+
+def load_libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ system = platform.system()
+
+ if system in ("Linux", "FreeBSD", "NetBSD", "OpenBSD"):
+ name = "libpq.so.5"
+ elif system == "Darwin":
+ name = "libpq.5.dylib"
+ elif system == "Windows":
+ name = "libpq.dll"
+ else:
+ assert False, f"the libpq fixture must be updated for {system}"
+
+    if system == "Windows":
+        # On Windows, libpq.dll is confusingly in bindir, not libdir, so
+        # load it via its full path from there.
+        libpq_path = os.path.join(bindir, name)
+    else:
+        libpq_path = os.path.join(libdir, name)
+
+    lib = ctypes.CDLL(libpq_path)
+
+ #
+ # Function Prototypes
+ #
+
+ lib.PQconnectdb.restype = _PGconn_p
+ lib.PQconnectdb.argtypes = [ctypes.c_char_p]
+
+ lib.PQstatus.restype = ctypes.c_int
+ lib.PQstatus.argtypes = [_PGconn_p]
+
+ lib.PQexec.restype = _PGresult_p
+ lib.PQexec.argtypes = [_PGconn_p, ctypes.c_char_p]
+
+ lib.PQresultStatus.restype = ctypes.c_int
+ lib.PQresultStatus.argtypes = [_PGresult_p]
+
+ lib.PQclear.restype = None
+ lib.PQclear.argtypes = [_PGresult_p]
+
+ lib.PQerrorMessage.restype = ctypes.c_char_p
+ lib.PQerrorMessage.argtypes = [_PGconn_p]
+
+ lib.PQfinish.restype = None
+ lib.PQfinish.argtypes = [_PGconn_p]
+
+ lib.PQresultErrorMessage.restype = ctypes.c_char_p
+ lib.PQresultErrorMessage.argtypes = [_PGresult_p]
+
+ lib.PQntuples.restype = ctypes.c_int
+ lib.PQntuples.argtypes = [_PGresult_p]
+
+ lib.PQnfields.restype = ctypes.c_int
+ lib.PQnfields.argtypes = [_PGresult_p]
+
+ lib.PQgetvalue.restype = ctypes.c_char_p
+ lib.PQgetvalue.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQgetisnull.restype = ctypes.c_int
+ lib.PQgetisnull.argtypes = [_PGresult_p, ctypes.c_int, ctypes.c_int]
+
+ lib.PQftype.restype = ctypes.c_uint
+ lib.PQftype.argtypes = [_PGresult_p, ctypes.c_int]
+
+ lib.PQresultErrorField.restype = ctypes.c_char_p
+ lib.PQresultErrorField.argtypes = [_PGresult_p, ctypes.c_int]
+
+ return lib
+
+
+# PostgreSQL type OIDs and conversion system
+# Type registry - maps OID to converter function
+_type_converters: Dict[int, Callable[[str], Any]] = {}
+_array_to_elem_map: Dict[int, int] = {}
+
+
+def register_type_info(
+ name: str, oid: int, array_oid: int, converter: Callable[[str], Any]
+):
+ """
+ Register a PostgreSQL type with its OID, array OID, and conversion function.
+
+ Usage:
+ register_type_info("bool", 16, 1000, lambda v: v == "t")
+ """
+ _type_converters[oid] = converter
+ if array_oid is not None:
+ _array_to_elem_map[array_oid] = oid
+
+
+def _parse_array(value: str, elem_oid: int):
+ """Parse PostgreSQL array syntax into nested Python lists."""
+ stack: list[list] = []
+ current_element: list[str] = []
+ in_quotes = False
+ was_quoted = False
+ pos = 0
+
+ while pos < len(value):
+ char = value[pos]
+
+ if in_quotes:
+ if char == "\\":
+ next_char = value[pos + 1]
+ if next_char not in '"\\':
+ raise NotImplementedError('Only \\" and \\\\ escapes are supported')
+ current_element.append(next_char)
+ pos += 2
+ continue
+ elif char == '"':
+ in_quotes = False
+ else:
+ current_element.append(char)
+ elif char == '"':
+ in_quotes = True
+ was_quoted = True
+ elif char == "{":
+ stack.append([])
+ elif char in ",}":
+ if current_element or was_quoted:
+ elem = "".join(current_element)
+ if not was_quoted and elem == "NULL":
+ stack[-1].append(None)
+ else:
+ stack[-1].append(_convert_pg_value(elem, elem_oid))
+ current_element = []
+ was_quoted = False
+ if char == "}":
+ completed = stack.pop()
+ if not stack:
+ return completed
+ stack[-1].append(completed)
+ elif char != " ":
+ current_element.append(char)
+ pos += 1
+
+ raise ValueError(f"Malformed array literal: {value}")
+
+
+# Register standard PostgreSQL types that we'll likely encounter in tests
+register_type_info("bool", 16, 1000, lambda v: v == "t")
+register_type_info("int2", 21, 1005, int)
+register_type_info("int4", 23, 1007, int)
+register_type_info("int8", 20, 1016, int)
+register_type_info("float4", 700, 1021, float)
+register_type_info("float8", 701, 1022, float)
+register_type_info("numeric", 1700, 1231, decimal.Decimal)
+register_type_info("text", 25, 1009, str)
+register_type_info("varchar", 1043, 1015, str)
+register_type_info("date", 1082, 1182, datetime.date.fromisoformat)
+register_type_info("time", 1083, 1183, datetime.time.fromisoformat)
+register_type_info("timestamp", 1114, 1115, datetime.datetime.fromisoformat)
+register_type_info("timestamptz", 1184, 1185, datetime.datetime.fromisoformat)
+register_type_info("uuid", 2950, 2951, uuid.UUID)
+register_type_info("json", 114, 199, json.loads)
+register_type_info("jsonb", 3802, 3807, json.loads)
+
+
+def _convert_pg_value(value: str, type_oid: int) -> Any:
+ """
+ Convert PostgreSQL string value to appropriate Python type based on OID.
+ Uses the registered type converters from register_type_info().
+ """
+ # Check if it's an array type
+ if type_oid in _array_to_elem_map:
+ elem_oid = _array_to_elem_map[type_oid]
+ return _parse_array(value, elem_oid)
+
+ # Use registered converter if available
+ converter = _type_converters.get(type_oid)
+ if converter:
+ return converter(value)
+
+ # Unknown types - return as string
+ return value
+
+
+def simplify_query_results(results) -> Any:
+ """
+ Simplify the results of a query so that the caller doesn't have to unpack
+ lists and tuples of length 1.
+ """
+ if len(results) == 1:
+ row = results[0]
+ if len(row) == 1:
+ # If there's only a single cell, just return the value
+ return row[0]
+ # If there's only a single row, just return that row
+ return row
+
+ if len(results) != 0 and len(results[0]) == 1:
+ # If there's only a single column, return an array of values
+ return [row[0] for row in results]
+
+ # if there are multiple rows and columns, return the results as is
+ return results
+
+
+class PGresult(contextlib.AbstractContextManager):
+ """Wraps a raw _PGresult_p with a more friendly interface."""
+
+ def __init__(self, lib: ctypes.CDLL, res: _PGresult_p):
+ self._lib = lib
+ self._res = res
+
+ def __exit__(self, *exc):
+ self._lib.PQclear(self._res)
+ self._res = None
+
+ def status(self) -> ExecStatus:
+ return ExecStatus(self._lib.PQresultStatus(self._res))
+
+ def error_message(self):
+ """Returns the error message associated with this result."""
+ msg = self._lib.PQresultErrorMessage(self._res)
+ return msg.decode() if msg else ""
+
+ def _get_error_field(self, field: DiagField) -> Optional[str]:
+ """Get an error field from the result using PQresultErrorField."""
+ val = self._lib.PQresultErrorField(self._res, int(field))
+ return val.decode() if val else None
+
+ def raise_error(self) -> None:
+ """
+ Raises LibpqError with diagnostic information from the result.
+ """
+ if not self._res:
+ raise LibpqError("query failed: out of memory or connection lost")
+
+ sqlstate = self._get_error_field(DiagField.SQLSTATE)
+ primary = self._get_error_field(DiagField.MESSAGE_PRIMARY)
+ detail = self._get_error_field(DiagField.MESSAGE_DETAIL)
+ hint = self._get_error_field(DiagField.MESSAGE_HINT)
+ severity = self._get_error_field(DiagField.SEVERITY)
+ schema_name = self._get_error_field(DiagField.SCHEMA_NAME)
+ table_name = self._get_error_field(DiagField.TABLE_NAME)
+ column_name = self._get_error_field(DiagField.COLUMN_NAME)
+ datatype_name = self._get_error_field(DiagField.DATATYPE_NAME)
+ constraint_name = self._get_error_field(DiagField.CONSTRAINT_NAME)
+ context = self._get_error_field(DiagField.CONTEXT)
+
+ position_str = self._get_error_field(DiagField.STATEMENT_POSITION)
+ position = int(position_str) if position_str else None
+
+ raise LibpqError(
+ primary or self.error_message(),
+ sqlstate=sqlstate,
+ severity=severity,
+ primary=primary,
+ detail=detail,
+ hint=hint,
+ schema_name=schema_name,
+ table_name=table_name,
+ column_name=column_name,
+ datatype_name=datatype_name,
+ constraint_name=constraint_name,
+ position=position,
+ context=context,
+ )
+
+ def fetch_all(self):
+ """
+ Fetch all rows and convert to Python types.
+ Returns a list of tuples, with values converted based on their PostgreSQL type.
+ """
+ nrows = self._lib.PQntuples(self._res)
+ ncols = self._lib.PQnfields(self._res)
+
+ # Get type OIDs for each column
+ type_oids = [self._lib.PQftype(self._res, col) for col in range(ncols)]
+
+ results = []
+ for row in range(nrows):
+ row_data = []
+ for col in range(ncols):
+ if self._lib.PQgetisnull(self._res, row, col):
+ row_data.append(None)
+ else:
+ value = self._lib.PQgetvalue(self._res, row, col).decode()
+ row_data.append(_convert_pg_value(value, type_oids[col]))
+ results.append(tuple(row_data))
+
+ return results
+
+
+class PGconn(contextlib.AbstractContextManager):
+ """
+ Wraps a raw _PGconn_p with a more friendly interface. This is just a
+ stub; it's expected to grow.
+ """
+
+ def __init__(
+ self,
+ lib: ctypes.CDLL,
+ handle: _PGconn_p,
+ stack: contextlib.ExitStack,
+ ):
+ self._lib = lib
+ self._handle = handle
+ self._stack = stack
+
+ def __exit__(self, *exc):
+ self._lib.PQfinish(self._handle)
+ self._handle = None
+
+ def exec(self, query: str):
+ """
+ Executes a query via PQexec() and returns a PGresult.
+ """
+ res = self._lib.PQexec(self._handle, query.encode())
+ return self._stack.enter_context(PGresult(self._lib, res))
+
+ def sql(self, query: str):
+ """
+ Executes a query and raises an exception if it fails.
+ Returns the query results with automatic type conversion and simplification.
+ For commands that don't return data (INSERT, UPDATE, etc.), returns None.
+
+ Examples:
+ - SELECT 1 -> 1
+ - SELECT 1, 2 -> (1, 2)
+ - SELECT * FROM generate_series(1, 3) -> [1, 2, 3]
+ - SELECT * FROM (VALUES (1, 'a'), (2, 'b')) t -> [(1, 'a'), (2, 'b')]
+ - CREATE TABLE ... -> None
+ - INSERT INTO ... -> None
+ """
+ res = self.exec(query)
+ status = res.status()
+
+ if status == ExecStatus.PGRES_FATAL_ERROR:
+ res.raise_error()
+ elif status == ExecStatus.PGRES_COMMAND_OK:
+ return None
+ elif status == ExecStatus.PGRES_TUPLES_OK:
+ results = res.fetch_all()
+ return simplify_query_results(results)
+ else:
+ res.raise_error()
+
+
+def connstr(opts: Dict[str, Any]) -> str:
+ """
+ Flattens the provided options into a libpq connection string. Values
+ are converted to str and quoted/escaped as necessary.
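+
+    For example:
+
+        connstr({"host": "localhost", "port": 5432})
+        # -> "host=localhost port=5432"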
+ """
+ settings = []
+
+ for k, v in opts.items():
+ v = str(v)
+ if not v:
+ v = "''"
+ else:
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+
+ if " " in v:
+ v = f"'{v}'"
+
+ settings.append(f"{k}={v}")
+
+ return " ".join(settings)
+
+
+def connect(
+ libpq_handle: ctypes.CDLL,
+ stack: contextlib.ExitStack,
+ remaining_timeout_fn: Callable[[], float],
+ **opts,
+) -> PGconn:
+ """
+ Connects to a server, using the given connection options, and
+ returns a PGconn object wrapping the connection handle. A
+ failure will raise LibpqError.
+
+ Connections honor PG_TEST_TIMEOUT_DEFAULT unless connect_timeout is
+ explicitly overridden in opts.
+
+ Args:
+ libpq_handle: ctypes.CDLL handle to libpq library
+ stack: ExitStack for managing connection cleanup
+ remaining_timeout_fn: Function that returns remaining timeout in seconds
+ **opts: Connection options (host, port, dbname, etc.)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Raises:
+ LibpqError: If connection fails
+ """
+
+ if "connect_timeout" not in opts:
+ t = int(remaining_timeout_fn())
+ opts["connect_timeout"] = max(t, 1)
+
+ conn_p = libpq_handle.PQconnectdb(connstr(opts).encode())
+
+ # Check connection status before adding to stack
+ if libpq_handle.PQstatus(conn_p) != ConnectionStatus.CONNECTION_OK:
+ error_msg = libpq_handle.PQerrorMessage(conn_p).decode()
+ # Manually close the failed connection
+ libpq_handle.PQfinish(conn_p)
+ raise LibpqError(error_msg)
+
+ # Connection succeeded - add to stack for cleanup
+ conn = stack.enter_context(PGconn(libpq_handle, conn_p, stack=stack))
+ return conn
diff --git a/src/test/pytest/libpq/errors.py b/src/test/pytest/libpq/errors.py
new file mode 100644
index 00000000000..c665b663e22
--- /dev/null
+++ b/src/test/pytest/libpq/errors.py
@@ -0,0 +1,62 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Exception classes for libpq errors.
+"""
+
+from typing import Optional
+
+
+class LibpqError(RuntimeError):
+ """Exception for libpq errors with PostgreSQL diagnostic fields."""
+
+ sqlstate: Optional[str]
+ severity: Optional[str]
+ primary: Optional[str]
+ detail: Optional[str]
+ hint: Optional[str]
+ schema_name: Optional[str]
+ table_name: Optional[str]
+ column_name: Optional[str]
+ datatype_name: Optional[str]
+ constraint_name: Optional[str]
+ position: Optional[int]
+ context: Optional[str]
+
+ def __init__(
+ self,
+ message: str,
+ *,
+ sqlstate: Optional[str] = None,
+ severity: Optional[str] = None,
+ primary: Optional[str] = None,
+ detail: Optional[str] = None,
+ hint: Optional[str] = None,
+ schema_name: Optional[str] = None,
+ table_name: Optional[str] = None,
+ column_name: Optional[str] = None,
+ datatype_name: Optional[str] = None,
+ constraint_name: Optional[str] = None,
+ position: Optional[int] = None,
+ context: Optional[str] = None,
+ ):
+ super().__init__(message)
+ self.sqlstate = sqlstate
+ self.severity = severity
+ self.primary = primary
+ self.detail = detail
+ self.hint = hint
+ self.schema_name = schema_name
+ self.table_name = table_name
+ self.column_name = column_name
+ self.datatype_name = datatype_name
+ self.constraint_name = constraint_name
+ self.position = position
+ self.context = context
+
+ @property
+ def sqlstate_class(self) -> Optional[str]:
+ """Returns the 2-character SQLSTATE class."""
+ if self.sqlstate and len(self.sqlstate) >= 2:
+ return self.sqlstate[:2]
+ return None
diff --git a/src/test/pytest/meson.build b/src/test/pytest/meson.build
index b1f6061b307..b86be901e7c 100644
--- a/src/test/pytest/meson.build
+++ b/src/test/pytest/meson.build
@@ -10,6 +10,10 @@ tests += {
'bd': meson.current_build_dir(),
'pytest': {
'tests': [
+ 'pyt/test_errors.py',
+ 'pyt/test_libpq.py',
+ 'pyt/test_multi_server.py',
+ 'pyt/test_query_helpers.py',
],
},
}
diff --git a/src/test/pytest/pypg/__init__.py b/src/test/pytest/pypg/__init__.py
new file mode 100644
index 00000000000..4ee91289f70
--- /dev/null
+++ b/src/test/pytest/pypg/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+from ._env import require_test_extras, skip_unless_test_extras
+from .server import PostgresServer
+
+__all__ = [
+ "require_test_extras",
+ "skip_unless_test_extras",
+ "PostgresServer",
+]
diff --git a/src/test/pytest/pypg/_env.py b/src/test/pytest/pypg/_env.py
new file mode 100644
index 00000000000..c4087be3212
--- /dev/null
+++ b/src/test/pytest/pypg/_env.py
@@ -0,0 +1,72 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import logging
+import os
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+def _test_extra_skip_reason(*keys: str) -> str:
+ return "requires {} to be set in PG_TEST_EXTRA".format(", ".join(keys))
+
+
+def _has_test_extra(key: str) -> bool:
+ """
+ Returns True if the PG_TEST_EXTRA environment variable contains the given
+ key.
+ """
+ extra = os.getenv("PG_TEST_EXTRA", "")
+ return key in extra.split()
+
+
+def require_test_extras(*keys: str):
+ """
+    A convenience decorator that skips tests unless all of the required keys
+    are present in PG_TEST_EXTRA.
+
+ To skip a particular test function or class:
+
+ @pypg.require_test_extras("ldap")
+ def test_some_ldap_feature():
+ ...
+
+ To skip an entire module:
+
+    pytestmark = pypg.require_test_extras("ssl", "kerberos")
+ """
+ return pytest.mark.skipif(
+ not all([_has_test_extra(k) for k in keys]),
+ reason=_test_extra_skip_reason(*keys),
+ )
+
+
+def skip_unless_test_extras(*keys: str):
+ """
+ Skip the current test/fixture if any of the required keys are not present
+ in PG_TEST_EXTRA. Use this inside fixtures where decorators can't be used.
+
+ @pytest.fixture
+ def my_fixture():
+ skip_unless_test_extras("ldap")
+ ...
+ """
+ if not all([_has_test_extra(k) for k in keys]):
+ pytest.skip(_test_extra_skip_reason(*keys))
+
+
+def test_timeout_default() -> int:
+ """
+ Returns the value of the PG_TEST_TIMEOUT_DEFAULT environment variable, in
+ seconds, or 180 if one was not provided.
+ """
+ default = os.getenv("PG_TEST_TIMEOUT_DEFAULT", "")
+ if not default:
+ return 180
+
+ try:
+ return int(default)
+ except ValueError as v:
+ logger.warning("PG_TEST_TIMEOUT_DEFAULT could not be parsed: " + str(v))
+ return 180
diff --git a/src/test/pytest/pypg/fixtures.py b/src/test/pytest/pypg/fixtures.py
new file mode 100644
index 00000000000..8c0cb60daa5
--- /dev/null
+++ b/src/test/pytest/pypg/fixtures.py
@@ -0,0 +1,335 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import time
+from typing import List
+
+import pytest
+
+from ._env import test_timeout_default
+from .util import capture
+from .server import PostgresServer
+
+from libpq import load_libpq_handle, connect as libpq_connect
+
+
+# Stash key for tracking servers for log reporting.
+_servers_key = pytest.StashKey[List[PostgresServer]]()
+
+
+def _record_server_for_log_reporting(request, server):
+ """Record a server for log reporting on test failure."""
+ if _servers_key not in request.node.stash:
+ request.node.stash[_servers_key] = []
+ request.node.stash[_servers_key].append(server)
+
+
+@pytest.fixture
+def remaining_timeout():
+ """
+ This fixture provides a function that returns how much of the
+ PG_TEST_TIMEOUT_DEFAULT remains for the current test, in fractional seconds.
+ This value is never less than zero.
+
+ This fixture is per-test, so the deadline is also reset on a per-test basis.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="module")
+def remaining_timeout_module():
+ """
+ Same as remaining_timeout, but the deadline is set once per module.
+
+    This fixture is per-module, so it is generally only useful for configuring
+    timeouts of operations that happen in the setup phase of other
+    module-scoped fixtures. If you use it in a test, each subsequent test in
+    the module gets a reduced timeout.
+ """
+ now = time.monotonic()
+ deadline = now + test_timeout_default()
+
+ return lambda: max(deadline - time.monotonic(), 0)
+
+
+@pytest.fixture(scope="session")
+def libpq_handle(libdir, bindir):
+ """
+ Loads a ctypes handle for libpq. Some common function prototypes are
+ initialized for general use.
+ """
+ try:
+ return load_libpq_handle(libdir, bindir)
+ except OSError as e:
+ if "wrong ELF class" in str(e):
+            # This happens in CI when trying to load a 32-bit libpq library
+            # with a 64-bit Python.
+ pytest.skip("libpq architecture does not match Python interpreter")
+ raise
+
+
+@pytest.fixture
+def connect(libpq_handle, remaining_timeout):
+ """
+ Returns a function to connect to PostgreSQL via libpq.
+
+ The returned function accepts connection options as keyword arguments
+ (host, port, dbname, etc.) and returns a PGconn object. Connections
+ are automatically cleaned up at the end of the test.
+
+ Example:
+ conn = connect(host='localhost', port=5432, dbname='postgres')
+ result = conn.sql("SELECT 1")
+ """
+ with contextlib.ExitStack() as stack:
+
+ def _connect(**opts):
+ return libpq_connect(libpq_handle, stack, remaining_timeout, **opts)
+
+ yield _connect
+
+
+@pytest.fixture(scope="session")
+def pg_config():
+ """
+ Returns the path to pg_config. Uses PG_CONFIG environment variable if set,
+ otherwise uses 'pg_config' from PATH.
+ """
+ return os.environ.get("PG_CONFIG", "pg_config")
+
+
+@pytest.fixture(scope="session")
+def bindir(pg_config):
+ """
+ Returns the PostgreSQL bin directory using pg_config --bindir.
+ """
+ return pathlib.Path(capture(pg_config, "--bindir"))
+
+
+@pytest.fixture(scope="session")
+def libdir(pg_config):
+ """
+ Returns the PostgreSQL lib directory using pg_config --libdir.
+ """
+ return pathlib.Path(capture(pg_config, "--libdir"))
+
+
+@pytest.fixture(scope="session")
+def tmp_check(tmp_path_factory) -> pathlib.Path:
+ """
+ Returns the tmp_check directory that should be used for the tests. If
+ TESTDATADIR is provided, that will be used; otherwise a new temporary
+ directory is created in the pytest temp root.
+ """
+ d = os.getenv("TESTDATADIR")
+ if d:
+ d = pathlib.Path(d)
+ else:
+ d = tmp_path_factory.mktemp("tmp_check")
+
+ return d
+
+
+@pytest.fixture(scope="session")
+def datadir(tmp_check):
+ """
+ Returns the data directory to use for the pg fixture.
+ """
+
+ return tmp_check / "pgdata"
+
+
+@pytest.fixture(scope="session")
+def sockdir(tmp_path_factory):
+ """
+ Returns the directory name to use as the server's unix_socket_directories
+ setting. Local client connections use this as the PGHOST.
+
+ At the moment, this is always put under the pytest temp root.
+ """
+ return tmp_path_factory.mktemp("sockfiles")
+
+
+@pytest.fixture(scope="session")
+def pg_server_global(bindir, datadir, sockdir, libpq_handle):
+ """
+ Starts a running Postgres server listening on localhost. The HBA initially
+ allows only local UNIX connections from the same user.
+
+ Returns a PostgresServer instance with methods for server management, configuration,
+ and creating test databases/users.
+ """
+ server = PostgresServer("default", bindir, datadir, sockdir, libpq_handle)
+
+ yield server
+
+ # Cleanup any test resources
+ server.cleanup()
+
+ # Stop the server
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def pg_server_module(pg_server_global):
+ """
+    Module-scoped server context. This can be useful so that certain settings
+    can be overridden at the module level through autouse fixtures; an example
+    of this is in the SSL tests.
+ """
+ with pg_server_global.subcontext() as s:
+ yield s
+
+
+@pytest.fixture
+def pg(request, pg_server_module, remaining_timeout):
+ """
+ Per-test server context. Use this fixture to make changes to the server
+ which will be rolled back at the end of the test (e.g., creating test
+ users/databases).
+
+ Also captures the PostgreSQL log position at test start so that any new
+ log entries can be included in the test report on failure.
+ """
+ with pg_server_module.start_new_test(remaining_timeout) as s:
+ _record_server_for_log_reporting(request, s)
+ yield s
+
+
+@pytest.fixture
+def conn(pg):
+ """
+ Returns a connected PGconn instance to the test PostgreSQL server.
+ The connection is automatically cleaned up at the end of the test.
+
+ Example:
+ def test_something(conn):
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ """
+ return pg.connect()
+
+
+@pytest.fixture
+def create_pg(request, bindir, sockdir, libpq_handle, tmp_check, remaining_timeout):
+ """
+ Factory fixture to create additional PostgreSQL servers (per-test scope).
+
+ Returns a function that creates new PostgreSQL server instances.
+ Servers are automatically cleaned up at the end of the test.
+
+ Example:
+ def test_multiple_servers(create_pg):
+ node1 = create_pg()
+ node2 = create_pg()
+ node3 = create_pg()
+ """
+ servers = []
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(servers) + 1
+ name = f"pg{count}"
+
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout)
+ _record_server_for_log_reporting(request, server)
+ servers.append(server)
+ return server
+
+ yield _create
+
+ for server in servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(scope="module")
+def _module_scoped_servers():
+ """Session-scoped list to track servers created by create_pg_module."""
+ return []
+
+
+@pytest.fixture(scope="module")
+def create_pg_module(
+ bindir,
+ sockdir,
+ libpq_handle,
+ tmp_check,
+ remaining_timeout_module,
+ _module_scoped_servers,
+):
+ """
+ Factory fixture to create additional PostgreSQL servers (module scope).
+
+ Like create_pg, but servers persist for the entire test module.
+ Use this when multiple tests in a module can share the same servers.
+
+ The timeout is automatically set on all servers at the start of each test
+ via the _set_module_server_timeouts autouse fixture.
+
+ Example:
+ @pytest.fixture(scope="module")
+ def shared_nodes(create_pg_module):
+ return [create_pg_module() for _ in range(3)]
+ """
+
+ def _create(name=None, **kwargs):
+ if name is None:
+ count = len(_module_scoped_servers) + 1
+ name = f"pg{count}"
+ datadir = tmp_check / f"pgdata_{name}"
+ server = PostgresServer(name, bindir, datadir, sockdir, libpq_handle, **kwargs)
+ server.set_timeout(remaining_timeout_module)
+ _module_scoped_servers.append(server)
+ return server
+
+ yield _create
+
+ for server in _module_scoped_servers:
+ server.cleanup()
+ server.stop()
+
+
+@pytest.fixture(autouse=True)
+def _set_module_server_timeouts(request, _module_scoped_servers, remaining_timeout):
+ """Autouse fixture that sets timeout, enters subcontext, and records log positions for module-scoped servers."""
+ with contextlib.ExitStack() as stack:
+ for server in _module_scoped_servers:
+ stack.enter_context(server.start_new_test(remaining_timeout))
+ _record_server_for_log_reporting(request, server)
+ yield
+
+
+@pytest.hookimpl(hookwrapper=True, trylast=True)
+def pytest_runtest_makereport(item, call):
+ """
+ Adds PostgreSQL server logs to the test report sections.
+ """
+ outcome = yield
+ report = outcome.get_result()
+
+ if report.when != "call":
+ return
+
+ if _servers_key not in item.stash:
+ return
+
+ servers = item.stash[_servers_key]
+ del item.stash[_servers_key]
+
+ include_name = len(servers) > 1
+
+ for server in servers:
+ content = server.log_content()
+ if content.strip():
+ section_title = "Postgres log"
+ if include_name:
+ section_title += f" ({server.name})"
+ report.sections.append((section_title, content))
diff --git a/src/test/pytest/pypg/server.py b/src/test/pytest/pypg/server.py
new file mode 100644
index 00000000000..9242ab25007
--- /dev/null
+++ b/src/test/pytest/pypg/server.py
@@ -0,0 +1,470 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import pathlib
+import platform
+import re
+import shutil
+import socket
+import subprocess
+import tempfile
+from collections import namedtuple
+from typing import Callable, Optional
+
+from .util import run
+from libpq import PGconn, connect as libpq_connect
+
+
+class FileBackup(contextlib.AbstractContextManager):
+ """
+ A context manager which backs up a file's contents, restoring them on exit.
+ """
+
+ def __init__(self, file: pathlib.Path):
+ super().__init__()
+
+ self._file = file
+
+ def __enter__(self):
+ with tempfile.NamedTemporaryFile(
+ prefix=self._file.name, dir=self._file.parent, delete=False
+ ) as f:
+ self._backup = pathlib.Path(f.name)
+
+ shutil.copyfile(self._file, self._backup)
+
+ return self
+
+ def __exit__(self, *exc):
+ # Swap the backup and the original file, so that the modified contents
+ # can still be inspected in case of failure.
+ tmp = self._backup.parent / (self._backup.name + ".tmp")
+
+ shutil.copyfile(self._file, tmp)
+ shutil.copyfile(self._backup, self._file)
+ shutil.move(tmp, self._backup)
+
+
+class HBA(FileBackup):
+ """
+ Backs up a server's HBA configuration and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "pg_hba.conf")
+
+ def prepend(self, *lines):
+ """
+ Temporarily prepends lines to the server's pg_hba.conf.
+
+ As sugar for aligning HBA columns in the tests, each line can be either
+ a string or a list of strings. List elements will be joined by single
+ spaces before they are written to file.
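+
+        For example, these two calls are equivalent:
+
+            hba.prepend("local all all trust")
+            hba.prepend(["local", "all", "all", "trust"])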
+ """
+ with open(self._file, "r") as f:
+ prior_data = f.read()
+
+ with open(self._file, "w") as f:
+ for line in lines:
+ if isinstance(line, list):
+ print(*line, file=f)
+ else:
+ print(line, file=f)
+
+ f.write(prior_data)
+
+
+class Config(FileBackup):
+ """
+ Backs up a server's postgresql.conf and provides means for temporarily
+ editing it.
+ """
+
+ def __init__(self, datadir: pathlib.Path):
+ super().__init__(datadir / "postgresql.conf")
+
+ def set(self, **gucs):
+ """
+ Temporarily appends GUC settings to the server's postgresql.conf.
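+
+        For example:
+
+            conf.set(log_connections="on", work_mem="64MB")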
+ """
+
+ with open(self._file, "a") as f:
+ print(file=f)
+
+ for n, v in gucs.items():
+ v = str(v)
+
+ # TODO: proper quoting
+ v = v.replace("\\", "\\\\")
+ v = v.replace("'", "\\'")
+ v = "'{}'".format(v)
+
+ print(n, "=", v, file=f)
+
+
+Backup = namedtuple("Backup", "conf, hba")
+
+
+class PostgresServer:
+ """
+ Represents a running PostgreSQL server instance with management utilities.
+ Provides methods for configuration, user/database creation, and server control.
+ """
+
+ def __init__(
+ self,
+ name,
+ bindir,
+ datadir,
+ sockdir,
+ libpq_handle,
+ *,
+ hostaddr: Optional[str] = None,
+ port: Optional[int] = None,
+ ):
+ """
+ Initialize and start a PostgreSQL server instance.
+
+ Args:
+ name: The name of this server instance (for logging purposes)
+ bindir: Path to PostgreSQL bin directory
+ datadir: Path to data directory for this server
+ sockdir: Path to directory for Unix sockets
+ libpq_handle: ctypes handle to libpq
+ hostaddr: If provided, use this specific address (e.g., "127.0.0.2")
+ port: If provided, use this port instead of finding a free one,
+ is currently only allowed if hostaddr is also provided
+ """
+
+ if hostaddr is None and port is not None:
+ raise NotImplementedError("port was provided without hostaddr")
+
+ self.name = name
+ self.datadir = datadir
+ self.sockdir = sockdir
+ self.libpq_handle = libpq_handle
+ self._remaining_timeout_fn: Optional[Callable[[], float]] = None
+ self._bindir = bindir
+ self._pg_ctl = bindir / "pg_ctl"
+ self.log = datadir / "postgresql.log"
+ self._log_start_pos = 0
+
+ # Determine whether to use Unix sockets
+ use_unix_sockets = platform.system() != "Windows" and hostaddr is None
+
+ # Use INITDB_TEMPLATE if available (much faster than running initdb)
+ initdb_template = os.environ.get("INITDB_TEMPLATE")
+ if initdb_template and os.path.isdir(initdb_template):
+ shutil.copytree(initdb_template, datadir)
+ else:
+ if platform.system() == "Windows":
+ auth_method = "trust"
+ else:
+ auth_method = "peer"
+ run(
+ bindir / "initdb",
+ "--no-sync",
+ "--auth",
+ auth_method,
+ "--pgdata",
+ self.datadir,
+ )
+
+ # Figure out a port to listen on. Attempt to reserve both IPv4 and IPv6
+ # addresses in one go.
+ #
+ # Note: socket.has_dualstack_ipv6/create_server are only in Python 3.8+.
+ if hostaddr is not None:
+ # Explicit address provided
+ addrs: list[str] = [hostaddr]
+ temp_sock = socket.socket()
+ if port is None:
+ temp_sock.bind((hostaddr, 0))
+ _, port = temp_sock.getsockname()
+
+ elif hasattr(socket, "has_dualstack_ipv6") and socket.has_dualstack_ipv6():
+ addr = ("::1", 0)
+ temp_sock = socket.create_server(
+ addr, family=socket.AF_INET6, dualstack_ipv6=True
+ )
+
+ hostaddr, port, _, _ = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr, "127.0.0.1"]
+
+ else:
+ addr = ("127.0.0.1", 0)
+
+ temp_sock = socket.socket()
+ temp_sock.bind(addr)
+
+ hostaddr, port = temp_sock.getsockname()
+ assert hostaddr is not None
+ addrs = [hostaddr]
+
+ # Store the computed values
+ self.hostaddr = hostaddr
+ self.port = port
+ # Including the host to use for connections - either the socket
+ # directory or TCP address
+ if use_unix_sockets:
+ self.host = str(sockdir)
+ else:
+ self.host = hostaddr
+
+ with open(os.path.join(datadir, "postgresql.conf"), "a") as f:
+ print(file=f)
+ if use_unix_sockets:
+ print(
+ "unix_socket_directories = '{}'".format(sockdir.as_posix()),
+ file=f,
+ )
+ else:
+ # Disable Unix sockets when using TCP to avoid lock conflicts
+ print("unix_socket_directories = ''", file=f)
+ print("listen_addresses = '{}'".format(",".join(addrs)), file=f)
+ print("port =", port, file=f)
+ print("log_connections = all", file=f)
+ print("fsync = off", file=f)
+ print("datestyle = 'ISO'", file=f)
+ print("timezone = 'UTC'", file=f)
+
+        # Between closing temp_sock and server start, we're racing against
+        # anything that wants to open up ephemeral ports, so try not to put
+        # any new work here.
+
+ temp_sock.close()
+ self.pg_ctl("start")
+
+ # Read the PID file to get the postmaster PID
+ with open(os.path.join(datadir, "postmaster.pid")) as f:
+ self.pid = int(f.readline().strip())
+
+ # ExitStack for cleanup callbacks
+ self._cleanup_stack = contextlib.ExitStack()
+
+ def current_log_position(self):
+ """Get the current end position of the log file."""
+ if self.log.exists():
+ return self.log.stat().st_size
+ return 0
+
+ def reset_log_position(self):
+ """Mark current log position as start for log_content()."""
+ self._log_start_pos = self.current_log_position()
+
+ @contextlib.contextmanager
+ def start_new_test(self, remaining_timeout):
+ """
+ Prepare server for a new test.
+
+ Sets timeout, resets log position, and enters a cleanup subcontext.
+ """
+ self.set_timeout(remaining_timeout)
+ self.reset_log_position()
+ with self.subcontext():
+ yield self
+
+ def psql(self, *args):
+ """Run psql with the given arguments."""
+ self._run(os.path.join(self._bindir, "psql"), "-w", *args)
+
+ def sql(self, query):
+ """Execute a SQL query via libpq. Returns simplified results."""
+ with self.connect() as conn:
+ return conn.sql(query)
+
+ def pg_ctl(self, *args):
+ """Run pg_ctl with the given arguments."""
+ self._run(self._pg_ctl, "--pgdata", self.datadir, "--log", self.log, *args)
+
+ def _run(self, cmd, *args, addenv: Optional[dict] = None):
+ """Run a command with PG* environment variables set."""
+ subenv = dict(os.environ)
+ subenv.update(
+ {
+ "PGHOST": str(self.host),
+ "PGPORT": str(self.port),
+ "PGDATABASE": "postgres",
+ "PGDATA": str(self.datadir),
+ }
+ )
+ if addenv:
+ subenv.update(addenv)
+ run(cmd, *args, env=subenv)
+
+ def create_users(self, *userkeys: str):
+ """Create test users and register them for cleanup."""
+ usermap = {}
+ for u in userkeys:
+ name = u + "user"
+ usermap[u] = name
+ self.psql("-c", "CREATE USER " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP USER " + name)
+ return usermap
+
+ def create_dbs(self, *dbkeys: str):
+ """Create test databases and register them for cleanup."""
+ dbmap = {}
+ for d in dbkeys:
+ name = d + "db"
+ dbmap[d] = name
+ self.psql("-c", "CREATE DATABASE " + name)
+ self._cleanup_stack.callback(self.psql, "-c", "DROP DATABASE " + name)
+ return dbmap
+
+ @contextlib.contextmanager
+ def reloading(self):
+ """
+ Provides a context manager for making configuration changes.
+
+ If the context suite finishes successfully, the configuration will
+ be reloaded via pg_ctl. On teardown, the configuration changes will
+ be unwound, and the server will be signaled to reload again.
+
+ The context target contains the following attributes which can be
+ used to configure the server:
+ - .conf: modifies postgresql.conf
+ - .hba: modifies pg_hba.conf
+
+ For example:
+
+            with pg.reloading() as s:
+ s.conf.set(log_connections="on")
+ s.hba.prepend("local all all trust")
+ """
+ # Push a reload onto the stack before making any other
+ # unwindable changes. That way the order of operations will be
+ #
+ # # test
+ # - config change 1
+ # - config change 2
+ # - reload
+ # # teardown
+ # - undo config change 2
+ # - undo config change 1
+ # - reload
+ #
+ self._cleanup_stack.callback(self.pg_ctl, "reload")
+ yield self._backup_configuration()
+
+ # Now actually reload
+ self.pg_ctl("reload")
+
+ @contextlib.contextmanager
+ def restarting(self):
+ """Like .reloading(), but with a full server restart."""
+ self._cleanup_stack.callback(self.pg_ctl, "restart")
+ yield self._backup_configuration()
+ self.pg_ctl("restart")
+
+ def _backup_configuration(self):
+ # Wrap the existing HBA and configuration with FileBackups.
+ return Backup(
+ hba=self._cleanup_stack.enter_context(HBA(self.datadir)),
+ conf=self._cleanup_stack.enter_context(Config(self.datadir)),
+ )
+
+ @contextlib.contextmanager
+ def subcontext(self):
+ """
+ Create a new cleanup context for per-test isolation.
+
+ Temporarily replaces the cleanup stack so that any cleanup callbacks
+ registered within this context will be cleaned up when the context exits.
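+
+        For example (a sketch):
+
+            with server.subcontext():
+                server.create_users("temp")  # dropped when the context exits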
+ """
+ old_stack = self._cleanup_stack
+ self._cleanup_stack = contextlib.ExitStack()
+ try:
+ self._cleanup_stack.__enter__()
+ yield self
+ finally:
+ self._cleanup_stack.__exit__(None, None, None)
+ self._cleanup_stack = old_stack
+
+ def stop(self, mode="fast"):
+ """
+ Stop the PostgreSQL server instance.
+
+ Ignores failures if the server is already stopped.
+ """
+ try:
+ self.pg_ctl("stop", "--mode", mode)
+ except subprocess.CalledProcessError:
+ # Server may have already been stopped
+ pass
+
+ def log_content(self) -> str:
+ """Return log content from the current context's start position."""
+ with open(self.log) as f:
+ f.seek(self._log_start_pos)
+ return f.read()
+
+ @contextlib.contextmanager
+ def log_contains(self, pattern, times=None):
+ """
+ Context manager that checks if the log matches pattern during the block.
+
+ Args:
+ pattern: The regex pattern to search for.
+ times: If None, any number of matches is accepted.
+ If a number, exactly that many matches are required.
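+
+        Example (illustrative; assumes connection logging is enabled):
+
+            with pg.log_contains(r"connection received"):
+                pg.connect().sql("SELECT 1")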
+ """
+ start_pos = self.current_log_position()
+ yield
+ with open(self.log) as f:
+ f.seek(start_pos)
+ content = f.read()
+ if times is None:
+ assert re.search(pattern, content), f"Pattern {pattern!r} not found in log"
+ else:
+ match_count = len(re.findall(pattern, content))
+ assert match_count == times, (
+ f"Expected {times} matches of {pattern!r}, found {match_count}"
+ )
+
+ def cleanup(self):
+ """Run all registered cleanup callbacks."""
+ self._cleanup_stack.close()
+
+ def set_timeout(self, remaining_timeout_fn: Callable[[], float]) -> None:
+ """
+ Set the timeout function for connections.
+ This is typically called by pg fixture for each test.
+ """
+ self._remaining_timeout_fn = remaining_timeout_fn
+
+ def connect(self, **opts) -> PGconn:
+ """
+ Creates a connection to this PostgreSQL server instance.
+
+ Args:
+ **opts: Additional connection options (can override defaults)
+
+ Returns:
+ PGconn: Connected database connection
+
+ Example:
+ conn = pg.connect()
+ conn = pg.connect(dbname='mydb')
+ """
+ if self._remaining_timeout_fn is None:
+ raise RuntimeError(
+ "Timeout function not set. Use set_timeout() or pg fixture."
+ )
+
+ defaults = {
+ "host": self.host,
+ "port": self.port,
+ "dbname": "postgres",
+ }
+ defaults.update(opts)
+
+ return libpq_connect(
+ self.libpq_handle,
+ self._cleanup_stack,
+ self._remaining_timeout_fn,
+ **defaults,
+ )
diff --git a/src/test/pytest/pypg/util.py b/src/test/pytest/pypg/util.py
new file mode 100644
index 00000000000..b2a1e627e4b
--- /dev/null
+++ b/src/test/pytest/pypg/util.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import shlex
+import subprocess
+import sys
+
+
+def eprint(*args, **kwargs):
+ """eprint prints to stderr"""
+ print(*args, file=sys.stderr, **kwargs)
+
+
+def run(*command, check=True, shell=None, silent=False, **kwargs):
+ """run runs the given command and prints it to stderr"""
+
+ if shell is None:
+ shell = len(command) == 1 and isinstance(command[0], str)
+
+ if shell:
+ command = command[0]
+ else:
+ command = list(map(str, command))
+
+ if not silent:
+ if shell:
+ eprint(f"+ {command}")
+ else:
+            # shlex.join produces a copy-pasteable representation of the
+            # command. (The str.removesuffix call in capture() below already
+            # requires Python 3.9, so shlex.join's 3.8 requirement is fine.)
+            eprint(f"+ {shlex.join(command)}")
+
+ if silent:
+ kwargs.setdefault("stdout", subprocess.DEVNULL)
+
+ return subprocess.run(command, check=check, shell=shell, **kwargs)
+
+
+def capture(command, *args, stdout=subprocess.PIPE, encoding="utf-8", **kwargs):
+ return run(
+ command, *args, stdout=stdout, encoding=encoding, **kwargs
+ ).stdout.removesuffix("\n")
diff --git a/src/test/pytest/pyt/conftest.py b/src/test/pytest/pyt/conftest.py
new file mode 100644
index 00000000000..dd73917c68c
--- /dev/null
+++ b/src/test/pytest/pyt/conftest.py
@@ -0,0 +1 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
diff --git a/src/test/pytest/pyt/test_errors.py b/src/test/pytest/pyt/test_errors.py
new file mode 100644
index 00000000000..771fe8f76e3
--- /dev/null
+++ b/src/test/pytest/pyt/test_errors.py
@@ -0,0 +1,34 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for libpq error types and SQLSTATE-based exception mapping.
+"""
+
+import pytest
+from libpq import LibpqError
+
+
+def test_syntax_error(conn):
+ """Invalid SQL syntax raises LibpqError with correct SQLSTATE."""
+ with pytest.raises(LibpqError) as exc_info:
+ conn.sql("SELEC 1")
+
+ err = exc_info.value
+ assert err.sqlstate == "42601"
+ assert err.sqlstate_class == "42"
+ assert "syntax" in str(err).lower()
+
+
+def test_unique_violation(conn):
+ """Unique violation includes all error fields."""
+ conn.sql("CREATE TEMP TABLE test_uv (id int CONSTRAINT test_uv_pk PRIMARY KEY)")
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ with pytest.raises(LibpqError) as exc_info:
+ conn.sql("INSERT INTO test_uv VALUES (1)")
+
+ err = exc_info.value
+ assert err.sqlstate == "23505"
+ assert err.table_name == "test_uv"
+ assert err.constraint_name == "test_uv_pk"
+ assert err.detail == "Key (id)=(1) already exists."
diff --git a/src/test/pytest/pyt/test_libpq.py b/src/test/pytest/pyt/test_libpq.py
new file mode 100644
index 00000000000..4fcf4056f41
--- /dev/null
+++ b/src/test/pytest/pyt/test_libpq.py
@@ -0,0 +1,172 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import os
+import socket
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+from libpq import connstr, LibpqError
+
+
+@pytest.mark.parametrize(
+ "opts, expected",
+ [
+ (dict(), ""),
+ (dict(port=5432), "port=5432"),
+ (dict(port=5432, dbname="postgres"), "port=5432 dbname=postgres"),
+ (dict(host=""), "host=''"),
+ (dict(host=" "), r"host=' '"),
+ (dict(keyword="'"), r"keyword=\'"),
+ (dict(keyword=" \\' "), r"keyword=' \\\' '"),
+ ],
+)
+def test_connstr(opts, expected):
+ """Tests the escape behavior for connstr()."""
+ assert connstr(opts) == expected
+
+
+def test_must_connect_errors(connect):
+ """Tests that connect() raises LibpqError."""
+ with pytest.raises(LibpqError, match="invalid connection option"):
+ connect(some_unknown_keyword="whatever")
+
+
+@pytest.fixture
+def local_server(tmp_path, remaining_timeout):
+ """
+ Opens up a local UNIX socket for mocking a Postgres server on a background
+ thread. See the _Server API for usage.
+
+ This fixture requires AF_UNIX support; dependent tests will be skipped on
+ platforms that don't provide it.
+ """
+
+ try:
+ from socket import AF_UNIX
+ except ImportError:
+ pytest.skip("AF_UNIX not supported on this platform")
+
+ class _Server(contextlib.ExitStack):
+ """
+ Implementation class for local_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ local_server.host/local_server.port.
+
+ _Server derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self.host = tmp_path
+ self.port = 5432
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(AF_UNIX, socket.SOCK_STREAM),
+ )
+
+ def bind_and_listen(self):
+ """
+ Does the actual work of binding the UNIX socket using the Postgres
+ server conventions and listening for connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ sockfile = self.host / ".s.PGSQL.{}".format(self.port)
+
+ # Lock down the permissions on the new socket.
+ prev_mask = os.umask(0o077)
+
+ # Bind (creating the socket file), and immediately register it for
+ # deletion from disk when the stack is cleaned up.
+ self._listener.bind(bytes(sockfile))
+ self.callback(os.unlink, sockfile)
+
+ os.umask(prev_mask)
+
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ with _Server() as s:
+ s.bind_and_listen()
+ yield s
+
+
+def test_connection_is_finished_on_error(connect, local_server):
+ """Tests that PQfinish() gets called at the end of testing."""
+ expected_error = "something is wrong"
+
+ def serve_error(s: socket.socket) -> None:
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Quick check for the startup packet version.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+
+ # Discard the remainder of the startup packet and send a v2 error.
+ s.recv(pktlen - 8)
+ s.send(b"E" + expected_error.encode() + b"\0")
+
+ # And now the socket should be closed.
+ assert not s.recv(1), "client sent unexpected data"
+
+ local_server.background(serve_error)
+
+ with pytest.raises(LibpqError, match=expected_error):
+ # Exiting this context should result in PQfinish().
+ connect(host=local_server.host, port=local_server.port)
diff --git a/src/test/pytest/pyt/test_multi_server.py b/src/test/pytest/pyt/test_multi_server.py
new file mode 100644
index 00000000000..8ee045b0cc8
--- /dev/null
+++ b/src/test/pytest/pyt/test_multi_server.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests demonstrating multi-server functionality using create_pg fixture.
+
+These tests verify that the pytest infrastructure correctly handles
+multiple PostgreSQL server instances within a single test, and that
+module-scoped servers persist across tests.
+"""
+
+import pytest
+
+
+def test_multiple_servers_basic(create_pg):
+ """Test that we can create and connect to multiple servers."""
+ node1 = create_pg("primary")
+ node2 = create_pg("secondary")
+
+ conn1 = node1.connect()
+ conn2 = node2.connect()
+
+ # Each server should have its own data directory
+ datadir1 = conn1.sql("SHOW data_directory")
+ datadir2 = conn2.sql("SHOW data_directory")
+ assert datadir1 != datadir2
+
+ # Each server should be listening on a different port
+ assert node1.port != node2.port
+
+
+@pytest.fixture(scope="module")
+def shared_server(create_pg_module):
+ """A server shared across all tests in this module."""
+ server = create_pg_module("shared")
+ server.sql("CREATE TABLE module_state (value int DEFAULT 0)")
+ return server
+
+
+def test_module_server_create_row(shared_server):
+ """First test: create a row in the shared server."""
+ shared_server.connect().sql("INSERT INTO module_state VALUES (42)")
+
+
+def test_module_server_see_row(shared_server):
+ """Second test: verify we see the row from the previous test."""
+ assert shared_server.connect().sql("SELECT value FROM module_state") == 42
diff --git a/src/test/pytest/pyt/test_query_helpers.py b/src/test/pytest/pyt/test_query_helpers.py
new file mode 100644
index 00000000000..abcd9084214
--- /dev/null
+++ b/src/test/pytest/pyt/test_query_helpers.py
@@ -0,0 +1,347 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for query helper functions with type conversion and result simplification.
+"""
+
+import uuid
+
+import pytest
+
+
+def test_single_cell_int(conn):
+ """Single cell integer query returns just the value."""
+ result = conn.sql("SELECT 1")
+ assert result == 1
+ assert isinstance(result, int)
+
+
+def test_single_cell_string(conn):
+ """Single cell string query returns just the value."""
+ result = conn.sql("SELECT 'hello'")
+ assert result == "hello"
+ assert isinstance(result, str)
+
+
+def test_single_cell_bool(conn):
+ """Single cell boolean query returns just the value."""
+
+ result = conn.sql("SELECT true")
+ assert result is True
+ assert isinstance(result, bool)
+
+ result = conn.sql("SELECT false")
+ assert result is False
+
+
+def test_single_cell_float(conn):
+ """Single cell float query returns just the value."""
+
+ result = conn.sql("SELECT 3.14::float4")
+ assert isinstance(result, float)
+ assert abs(result - 3.14) < 0.01
+
+
+def test_single_cell_null(conn):
+ """Single cell NULL query returns None."""
+
+ result = conn.sql("SELECT NULL")
+ assert result is None
+
+
+def test_single_row_multiple_columns(conn):
+ """Single row with multiple columns returns a tuple."""
+
+ result = conn.sql("SELECT 1, 'hello', true")
+ assert result == (1, "hello", True)
+ assert isinstance(result, tuple)
+
+
+def test_single_column_multiple_rows(conn):
+ """Single column with multiple rows returns a list of values."""
+
+ result = conn.sql("SELECT * FROM generate_series(1, 3)")
+ assert result == [1, 2, 3]
+ assert isinstance(result, list)
+
+
+def test_multiple_rows_and_columns(conn):
+ """Multiple rows and columns returns list of tuples."""
+
+ result = conn.sql("SELECT * FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS t")
+ assert result == [(1, "a"), (2, "b"), (3, "c")]
+ assert isinstance(result, list)
+ assert all(isinstance(row, tuple) for row in result)
+
+
+def test_empty_result(conn):
+ """Empty result set returns empty list."""
+
+ result = conn.sql("SELECT 1 WHERE false")
+ assert result == []
+
+
+def test_query_error_handling(conn):
+ """Query errors raise RuntimeError with actual error message."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT * FROM nonexistent_table")
+
+ error_msg = str(exc_info.value)
+ assert "nonexistent_table" in error_msg or "does not exist" in error_msg
+
+
+def test_division_by_zero_error(conn):
+ """Division by zero raises RuntimeError."""
+
+ with pytest.raises(RuntimeError) as exc_info:
+ conn.sql("SELECT 1/0")
+
+ error_msg = str(exc_info.value)
+ assert "division by zero" in error_msg.lower()
+
+
+def test_simple_exec_create_table(conn):
+ """sql for CREATE TABLE returns None."""
+
+ result = conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ assert result is None
+
+ # Verify table was created
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 0
+
+
+def test_simple_exec_insert(conn):
+ """sql for INSERT returns None."""
+
+ conn.sql("CREATE TEMP TABLE test_table (id int, name text)")
+ result = conn.sql("INSERT INTO test_table VALUES (1, 'Alice'), (2, 'Bob')")
+ assert result is None
+
+ # Verify data was inserted
+ count = conn.sql("SELECT COUNT(*) FROM test_table")
+ assert count == 2
+
+
+def test_type_conversion_mixed(conn):
+ """Test mixed type conversion in a single row."""
+
+ result = conn.sql("SELECT 42::int4, 123::int8, 3.14::float8, 'text', true, NULL")
+ assert result == (42, 123, 3.14, "text", True, None)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], int)
+ assert isinstance(result[2], float)
+ assert isinstance(result[3], str)
+ assert isinstance(result[4], bool)
+ assert result[5] is None
+
+
+def test_multiple_queries_same_connection(conn):
+ """Test running multiple queries on the same connection."""
+
+ result1 = conn.sql("SELECT 1")
+ assert result1 == 1
+
+ result2 = conn.sql("SELECT 'hello', 'world'")
+ assert result2 == ("hello", "world")
+
+ result3 = conn.sql("SELECT * FROM generate_series(1, 5)")
+ assert result3 == [1, 2, 3, 4, 5]
+
+
+def test_date_type(conn):
+ """Test date type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20'::date")
+ assert result == datetime.date(2025, 10, 20)
+ assert isinstance(result, datetime.date)
+
+
+def test_timestamp_type(conn):
+ """Test timestamp type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '2025-10-20 15:30:45'::timestamp")
+ assert result == datetime.datetime(2025, 10, 20, 15, 30, 45)
+ assert isinstance(result, datetime.datetime)
+
+
+def test_time_type(conn):
+ """Test time type conversion."""
+ import datetime
+
+ result = conn.sql("SELECT '15:30:45'::time")
+ assert result == datetime.time(15, 30, 45)
+ assert isinstance(result, datetime.time)
+
+
+def test_numeric_type(conn):
+ """Test numeric/decimal type conversion."""
+ import decimal
+
+ result = conn.sql("SELECT 123.456::numeric")
+ assert result == decimal.Decimal("123.456")
+ assert isinstance(result, decimal.Decimal)
+
+
+def test_int_array(conn):
+ """Test integer array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[1, 2, 3, 4, 5]")
+ assert result == [1, 2, 3, 4, 5]
+ assert isinstance(result, list)
+ assert all(isinstance(x, int) for x in result)
+
+
+def test_text_array(conn):
+ """Test text array type conversion."""
+
+ result = conn.sql("SELECT ARRAY['hello', 'world', 'test']")
+ assert result == ["hello", "world", "test"]
+ assert isinstance(result, list)
+ assert all(isinstance(x, str) for x in result)
+
+
+def test_bool_array(conn):
+ """Test boolean array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[true, false, true]")
+ assert result == [True, False, True]
+ assert isinstance(result, list)
+ assert all(isinstance(x, bool) for x in result)
+
+
+def test_empty_array(conn):
+ """Test empty array type conversion."""
+
+ result = conn.sql("SELECT ARRAY[]::int[]")
+ assert result == []
+ assert isinstance(result, list)
+
+
+def test_json_type(conn):
+ """Test JSON type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"key": "value"}\'::json')
+ assert isinstance(result, dict)
+ assert result == {"key": "value"}
+
+
+def test_jsonb_type(conn):
+ """Test JSONB type (parsed to dict)."""
+
+ result = conn.sql('SELECT \'{"name": "test", "count": 42}\'::jsonb')
+ assert isinstance(result, dict)
+ assert result == {"name": "test", "count": 42}
+
+
+def test_json_array(conn):
+ """Test JSON array type."""
+
+ result = conn.sql("SELECT '[1, 2, 3, 4, 5]'::json")
+ assert isinstance(result, list)
+ assert result == [1, 2, 3, 4, 5]
+
+
+def test_json_nested(conn):
+ """Test nested JSON object."""
+
+ result = conn.sql(
+ 'SELECT \'{"user": {"id": 1, "name": "Alice"}, "active": true}\'::json'
+ )
+ assert isinstance(result, dict)
+ assert result == {"user": {"id": 1, "name": "Alice"}, "active": True}
+
+
+def test_mixed_types_with_arrays(conn):
+ """Test mixed types including arrays in a single row."""
+
+ result = conn.sql("SELECT 42, 'text', ARRAY[1, 2, 3], true")
+ assert result == (42, "text", [1, 2, 3], True)
+ assert isinstance(result[0], int)
+ assert isinstance(result[1], str)
+ assert isinstance(result[2], list)
+ assert isinstance(result[3], bool)
+
+
+def test_uuid_type(conn):
+ """Test UUID type conversion."""
+ test_uuid = "550e8400-e29b-41d4-a716-446655440000"
+ result = conn.sql(f"SELECT '{test_uuid}'::uuid")
+ assert result == uuid.UUID(test_uuid)
+ assert isinstance(result, uuid.UUID)
+
+
+def test_uuid_generation(conn):
+ """Test generated UUID type conversion."""
+ result = conn.sql("SELECT uuidv4()")
+ assert isinstance(result, uuid.UUID)
+    # A canonical UUID string is 36 characters (32 hex digits plus 4 hyphens).
+    assert len(str(result)) == 36
+
+
+def test_text_array_with_commas(conn):
+ """Test text array with elements containing commas."""
+
+ result = conn.sql("SELECT ARRAY['A,B', 'C', ' D ']")
+ assert result == ["A,B", "C", " D "]
+
+
+def test_text_array_with_quotes(conn):
+ """Test text array with elements containing quotes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\"b', 'c']")
+ assert result == ['a"b', "c"]
+
+
+def test_text_array_with_backslash(conn):
+ """Test text array with elements containing backslashes."""
+
+ result = conn.sql(r"SELECT ARRAY[E'a\\b', 'c']")
+ assert result == ["a\\b", "c"]
+
+
+def test_json_array_type(conn):
+ """Test array of JSON values with embedded quotes and commas."""
+
+ result = conn.sql("""SELECT ARRAY['{"abc": 123, "xyz": 456}'::json]""")
+ assert result == [{"abc": 123, "xyz": 456}]
+
+
+def test_json_array_multiple(conn):
+ """Test array of multiple JSON objects."""
+
+ result = conn.sql(
+ """SELECT ARRAY['{"a": 1}'::json, '{"b": 2}'::json, '["x", "y"]'::json]"""
+ )
+ assert result == [{"a": 1}, {"b": 2}, ["x", "y"]]
+
+
+def test_2d_int_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[1,2],[3,4]]")
+ assert result == [[1, 2], [3, 4]]
+
+
+def test_2d_text_array(conn):
+ """Test 2D integer array."""
+
+ result = conn.sql("SELECT ARRAY[['a','b'],['c','d,e']]")
+ assert result == [["a", "b"], ["c", "d,e"]]
+
+
+def test_3d_int_array(conn):
+ """Test 3D integer array."""
+
+ result = conn.sql("SELECT ARRAY[[[1,2],[3,4]],[[5,6],[7,8]]]")
+ assert result == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
+
+
+def test_array_with_null(conn):
+ """Test array with NULL elements."""
+
+ result = conn.sql("SELECT ARRAY[1, NULL, 3]")
+ assert result == [1, None, 3]
--
2.52.0
[Attachment: v10-0003-POC-Convert-load-balance-tests-from-perl-to-pyth.patch]
From b81466d13c30db0d74381913fc7889cb7fc3f7c1 Mon Sep 17 00:00:00 2001
From: Jelte Fennema-Nio <postgres@jeltef.nl>
Date: Fri, 26 Dec 2025 12:31:43 +0100
Subject: [PATCH v10 3/5] POC: Convert load balance tests from perl to python
This is a proof of concept showing how to use the pytest test
infrastructure. It converts two existing tests that previously could
not share any code; now they do. If we ever introduce another load
balancing method (e.g. round robin), we can test it for both DNS and
hostlist based load balancing by adding a single new test function, as
sketched below.
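For illustration, a hypothetical test for such a method could look like
the sketch below (load_balance_hosts=round_robin does not exist today;
the fixtures are the ones introduced by this patch). The parametrized
load_balance_nodes fixture would run it against both the hostlist and
DNS environments automatically:
    def test_load_balance_hosts_round_robin(load_balance_nodes):
        """Hypothetical: round robin cycles through all nodes in turn."""
        nodes, connect = load_balance_nodes
        # Nine connections over three nodes should land three on each,
        # assuming the mode cycles deterministically and the node logs
        # contain no earlier connections.
        for _ in range(9):
            connect(load_balance_hosts="round_robin")
        occurrences = [
            len(re.findall("connection received", node.log_content()))
            for node in nodes
        ]
        assert occurrences == [3, 3, 3]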
---
src/interfaces/libpq/Makefile | 1 +
src/interfaces/libpq/meson.build | 7 +-
src/interfaces/libpq/pyt/test_load_balance.py | 170 ++++++++++++++++++
.../libpq/t/003_load_balance_host_list.pl | 94 ----------
.../libpq/t/004_load_balance_dns.pl | 144 ---------------
5 files changed, 176 insertions(+), 240 deletions(-)
create mode 100644 src/interfaces/libpq/pyt/test_load_balance.py
delete mode 100644 src/interfaces/libpq/t/003_load_balance_host_list.pl
delete mode 100644 src/interfaces/libpq/t/004_load_balance_dns.pl
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index bf4baa92917..4c4bdb4b3a3 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -167,6 +167,7 @@ check installcheck: export PATH := $(CURDIR)/test:$(PATH)
check: test-build all
$(prove_check)
+ $(pytest_check)
installcheck: test-build all
$(prove_installcheck)
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index c5ecd9c3a87..56790dd92a9 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -150,8 +150,6 @@ tests += {
'tests': [
't/001_uri.pl',
't/002_api.pl',
- 't/003_load_balance_host_list.pl',
- 't/004_load_balance_dns.pl',
't/005_negotiate_encryption.pl',
't/006_service.pl',
],
@@ -162,6 +160,11 @@ tests += {
},
'deps': libpq_test_deps,
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_load_balance.py',
+ ],
+ },
}
subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq/pyt/test_load_balance.py b/src/interfaces/libpq/pyt/test_load_balance.py
new file mode 100644
index 00000000000..0af46d8f37d
--- /dev/null
+++ b/src/interfaces/libpq/pyt/test_load_balance.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+"""
+Tests for load_balance_hosts connection parameter.
+
+These tests verify that libpq correctly handles load balancing across multiple
+PostgreSQL servers specified in the connection string.
+"""
+
+import platform
+import re
+
+import pytest
+
+from libpq import LibpqError
+import pypg
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_hostlist(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes with different socket directories.
+
+ Each node has its own Unix socket directory for isolation.
+ Returns a tuple of (nodes, connect).
+ """
+ nodes = [create_pg_module() for _ in range(3)]
+
+ hostlist = ",".join(node.host for node in nodes)
+ portlist = ",".join(str(node.port) for node in nodes)
+
+ def connect(**kwargs):
+ return nodes[0].connect(host=hostlist, port=portlist, **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module")
+def load_balance_nodes_dns(create_pg_module):
+ """
+ Create 3 PostgreSQL nodes on the same port but different IP addresses.
+
+ Uses 127.0.0.1, 127.0.0.2, 127.0.0.3 with a shared port, so that
+ connections to 'pg-loadbalancetest' can be load balanced via DNS.
+
+ Since setting up a DNS server is more effort than we consider reasonable to
+ run this test, this situation is instead imitated by using a hosts file
+ where a single hostname maps to multiple different IP addresses. This test
+ requires the administrator to add the following lines to the hosts file (if
+ we detect that this hasn't happened we skip the test):
+
+ 127.0.0.1 pg-loadbalancetest
+ 127.0.0.2 pg-loadbalancetest
+ 127.0.0.3 pg-loadbalancetest
+
+    Windows or Linux is required to run this test, because those OSes allow
+    binding to the 127.0.0.2 and 127.0.0.3 addresses by default, while other
+    OSes don't. We need to bind to distinct IP addresses so that the hosts
+    file entries above actually map to different servers.
+
+ The hosts file needs to be prepared before running this test. We don't do
+ it on the fly, because it requires root permissions to change the hosts
+ file. In CI we set up the previously mentioned rules in the hosts file, so
+ that this load balancing method is tested.
+
+ Requires PG_TEST_EXTRA=load_balance because it requires this manual hosts
+ file configuration and also uses TCP with trust auth, which is potentially
+ unsafe on multiuser systems.
+ """
+ pypg.skip_unless_test_extras("load_balance")
+
+ if platform.system() not in ("Linux", "Windows"):
+ pytest.skip("DNS load balance test only supported on Linux and Windows")
+
+ if platform.system() == "Windows":
+ hosts_path = r"c:\Windows\System32\Drivers\etc\hosts"
+ else:
+ hosts_path = "/etc/hosts"
+
+ try:
+ with open(hosts_path) as f:
+ hosts_content = f.read()
+    except OSError:  # IOError is just an alias of OSError in Python 3
+ pytest.skip(f"Could not read hosts file: {hosts_path}")
+
+ count = len(re.findall(r"127\.0\.0\.[1-3]\s+pg-loadbalancetest", hosts_content))
+ if count != 3:
+ pytest.skip("hosts file not prepared for DNS load balance test")
+
+ first_node = create_pg_module(hostaddr="127.0.0.1")
+ nodes = [
+ first_node,
+ create_pg_module(hostaddr="127.0.0.2", port=first_node.port),
+ create_pg_module(hostaddr="127.0.0.3", port=first_node.port),
+ ]
+
+ # Allow trust authentication for TCP connections from loopback
+ for node in nodes:
+ hba_path = node.datadir / "pg_hba.conf"
+ with open(hba_path, "r") as f:
+ original_content = f.read()
+ with open(hba_path, "w") as f:
+ f.write("host all all 127.0.0.0/8 trust\n")
+ f.write(original_content)
+ node.pg_ctl("reload")
+
+ def connect(**kwargs):
+ return nodes[0].connect(host="pg-loadbalancetest", **kwargs)
+
+ return nodes, connect
+
+
+@pytest.fixture(scope="module", params=["hostlist", "dns"])
+def load_balance_nodes(request):
+ """
+ Parametrized fixture providing both load balancing test environments.
+ """
+ return request.getfixturevalue(f"load_balance_nodes_{request.param}")
+
+
+def test_load_balance_hosts_invalid_value(load_balance_nodes):
+ """load_balance_hosts doesn't accept unknown values."""
+ _, connect = load_balance_nodes
+
+ with pytest.raises(
+ LibpqError, match='invalid load_balance_hosts value: "doesnotexist"'
+ ):
+ connect(load_balance_hosts="doesnotexist")
+
+
+def test_load_balance_hosts_disable(load_balance_nodes):
+ """load_balance_hosts=disable always connects to the first node."""
+ nodes, connect = load_balance_nodes
+
+ with nodes[0].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+
+def test_load_balance_hosts_random_distribution(load_balance_nodes):
+ """load_balance_hosts=random distributes connections across all nodes."""
+ nodes, connect = load_balance_nodes
+
+ for _ in range(50):
+ connect(load_balance_hosts="random")
+
+ occurrences = [
+ len(re.findall("connection received", node.log_content())) for node in nodes
+ ]
+
+    # Statistically, each node should receive at least one connection:
+    # the probability that a given node receives none is (2/3)^50 ≈ 1.57e-9.
+ assert occurrences[0] > 0, "node1 should receive at least one connection"
+ assert occurrences[1] > 0, "node2 should receive at least one connection"
+ assert occurrences[2] > 0, "node3 should receive at least one connection"
+ assert sum(occurrences) == 50, "total connections should be 50"
+
+
+def test_load_balance_hosts_failover(load_balance_nodes):
+ """load_balance_hosts continues trying hosts until it finds a working one."""
+ nodes, connect = load_balance_nodes
+
+ nodes[0].stop()
+ nodes[1].stop()
+
+ with nodes[2].log_contains("connection received"):
+ connect(load_balance_hosts="disable")
+
+ with nodes[2].log_contains("connection received", times=5):
+ for _ in range(5):
+ connect(load_balance_hosts="random")
diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl
deleted file mode 100644
index 1f970ff994b..00000000000
--- a/src/interfaces/libpq/t/003_load_balance_host_list.pl
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-# This tests load balancing across the list of different hosts in the host
-# parameter of the connection string.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $node1 = PostgreSQL::Test::Cluster->new('node1');
-my $node2 = PostgreSQL::Test::Cluster->new('node2', own_host => 1);
-my $node3 = PostgreSQL::Test::Cluster->new('node3', own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# Start the tests for load balancing method 1
-my $hostlist = $node1->host . ',' . $node2->host . ',' . $node3->host;
-my $portlist = $node1->port . ',' . $node2->port . ',' . $node3->port;
-
-$node1->connect_fails(
- "host=$hostlist port=$portlist load_balance_hosts=doesnotexist",
- "load_balance_hosts doesn't accept unknown values",
- expected_stderr => qr/invalid load_balance_hosts value: "doesnotexist"/);
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to the a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=$hostlist port=$portlist load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to the a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl
deleted file mode 100644
index e1ff9a06024..00000000000
--- a/src/interfaces/libpq/t/004_load_balance_dns.pl
+++ /dev/null
@@ -1,144 +0,0 @@
-# Copyright (c) 2023-2026, PostgreSQL Global Development Group
-use strict;
-use warnings FATAL => 'all';
-use Config;
-use PostgreSQL::Test::Utils;
-use PostgreSQL::Test::Cluster;
-use Test::More;
-
-if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\bload_balance\b/)
-{
- plan skip_all =>
- 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';
-}
-
-# This tests loadbalancing based on a DNS entry that contains multiple records
-# for different IPs. Since setting up a DNS server is more effort than we
-# consider reasonable to run this test, this situation is instead imitated by
-# using a hosts file where a single hostname maps to multiple different IP
-# addresses. This test requires the administrator to add the following lines to
-# the hosts file (if we detect that this hasn't happened we skip the test):
-#
-# 127.0.0.1 pg-loadbalancetest
-# 127.0.0.2 pg-loadbalancetest
-# 127.0.0.3 pg-loadbalancetest
-#
-# Windows or Linux are required to run this test because these OSes allow
-# binding to 127.0.0.2 and 127.0.0.3 addresses by default, but other OSes
-# don't. We need to bind to different IP addresses, so that we can use these
-# different IP addresses in the hosts file.
-#
-# The hosts file needs to be prepared before running this test. We don't do it
-# on the fly, because it requires root permissions to change the hosts file. In
-# CI we set up the previously mentioned rules in the hosts file, so that this
-# load balancing method is tested.
-
-# Cluster setup which is shared for testing both load balancing methods
-my $can_bind_to_127_0_0_2 =
- $Config{osname} eq 'linux' || $PostgreSQL::Test::Utils::windows_os;
-
-# Checks for the requirements for testing load balancing method 2
-if (!$can_bind_to_127_0_0_2)
-{
- plan skip_all => 'load_balance test only supported on Linux and Windows';
-}
-
-my $hosts_path;
-if ($windows_os)
-{
- $hosts_path = 'c:\Windows\System32\Drivers\etc\hosts';
-}
-else
-{
- $hosts_path = '/etc/hosts';
-}
-
-my $hosts_content = PostgreSQL::Test::Utils::slurp_file($hosts_path);
-
-my $hosts_count = () =
- $hosts_content =~ /127\.0\.0\.[1-3] pg-loadbalancetest/g;
-if ($hosts_count != 3)
-{
- # Host file is not prepared for this test
- plan skip_all => "hosts file was not prepared for DNS load balance test";
-}
-
-$PostgreSQL::Test::Cluster::use_tcp = 1;
-$PostgreSQL::Test::Cluster::test_pghost = '127.0.0.1';
-my $port = PostgreSQL::Test::Cluster::get_free_port();
-my $node1 = PostgreSQL::Test::Cluster->new('node1', port => $port);
-my $node2 =
- PostgreSQL::Test::Cluster->new('node2', port => $port, own_host => 1);
-my $node3 =
- PostgreSQL::Test::Cluster->new('node3', port => $port, own_host => 1);
-
-# Create a data directory with initdb
-$node1->init();
-$node2->init();
-$node3->init();
-
-# Start the PostgreSQL server
-$node1->start();
-$node2->start();
-$node3->start();
-
-# load_balance_hosts=disable should always choose the first one.
-$node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable connects to the first node",
- sql => "SELECT 'connect1'",
- log_like => [qr/statement: SELECT 'connect1'/]);
-
-
-# Statistically the following loop with load_balance_hosts=random will almost
-# certainly connect at least once to each of the nodes. The chance of that not
-# happening is so small that it's negligible: (2/3)^50 = 1.56832855e-9
-foreach my $i (1 .. 50)
-{
- $node1->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "repeated connections with random load balancing",
- sql => "SELECT 'connect2'");
-}
-
-my $node1_occurrences = () =
- $node1->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node2_occurrences = () =
- $node2->log_content() =~ /statement: SELECT 'connect2'/g;
-my $node3_occurrences = () =
- $node3->log_content() =~ /statement: SELECT 'connect2'/g;
-
-my $total_occurrences =
- $node1_occurrences + $node2_occurrences + $node3_occurrences;
-
-cmp_ok($node1_occurrences, '>', 1,
- "received at least one connection on node1");
-cmp_ok($node2_occurrences, '>', 1,
- "received at least one connection on node2");
-cmp_ok($node3_occurrences, '>', 1,
- "received at least one connection on node3");
-is($total_occurrences, 50, "received 50 connections across all nodes");
-
-$node1->stop();
-$node2->stop();
-
-# load_balance_hosts=disable should continue trying hosts until it finds a
-# working one.
-$node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=disable",
- "load_balance_hosts=disable continues until it connects to a working node",
- sql => "SELECT 'connect3'",
- log_like => [qr/statement: SELECT 'connect3'/]);
-
-# Also with load_balance_hosts=random we continue to the next nodes if previous
-# ones are down. Connect a few times to make sure it's not just lucky.
-foreach my $i (1 .. 5)
-{
- $node3->connect_ok(
- "host=pg-loadbalancetest port=$port load_balance_hosts=random",
- "load_balance_hosts=random continues until it connects to a working node",
- sql => "SELECT 'connect4'",
- log_like => [qr/statement: SELECT 'connect4'/]);
-}
-
-done_testing();
--
2.52.0
[Attachment: v10-0004-WIP-pytest-Add-some-SSL-client-tests.patch]
From dd3cde0039ac91aada7004a1fea10238a4e0a6a8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:30:55 +0100
Subject: [PATCH v10 4/5] WIP: pytest: Add some SSL client tests
This is a sample client-only test suite. It tests some handshake
failures against a mock server, as well as a full SSL handshake + empty
query + response.
pyca/cryptography is added as a new package dependency. Certificates for
testing are generated on the fly.
The mock design is threaded: the server socket is listening on a
background thread, and the test provides the server logic via a
callback. There is some additional work still needed to make this
production-ready; see the notes for _TCPServer.background(). (Currently,
an exception in the wrong place could result in a hang-until-timeout
rather than an immediate failure.)
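The test-facing API is roughly the following sketch (see the tests in
test_client.py for real usage):
    def test_example(connect, tcp_server):
        def serve(s):
            ...  # drive the wire protocol by hand; runs on the background thread
        tcp_server.background(serve)
        connect(**tcp_server.conninfo)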
TODOs:
- local_server and tcp_server_class are nearly identical and should
share code.
- fix exception-related timeouts for .background()
- figure out the proper use of "session" vs "module" scope
- ensure that pq.libpq unwinds (to close connections) before tcp_server;
see comment in test_server_with_ssl_disabled()
---
.cirrus.tasks.yml | 2 +
pyproject.toml | 8 +
src/test/ssl/Makefile | 2 +
src/test/ssl/meson.build | 6 +
src/test/ssl/pyt/conftest.py | 128 +++++++++++++++
src/test/ssl/pyt/test_client.py | 278 ++++++++++++++++++++++++++++++++
6 files changed, 424 insertions(+)
create mode 100644 src/test/ssl/pyt/conftest.py
create mode 100644 src/test/ssl/pyt/test_client.py
diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index c9db12d53b9..c75f12b779b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -646,6 +646,7 @@ task:
CIRRUS_WORKING_DIR: ${HOME}/pgsql/
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
+ PYTEST_DEBUG_TEMPROOT: /tmp # default is too long for UNIX sockets on Mac
MESON_FEATURES: >-
-Dbonjour=enabled
@@ -666,6 +667,7 @@ task:
p5.34-io-tty
p5.34-ipc-run
python312
+ py312-cryptography
py312-packaging
py312-pytest
tcl
diff --git a/pyproject.toml b/pyproject.toml
index 4628d2274e0..00c8ae88583 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,6 +12,14 @@ dependencies = [
# Any other dependencies are effectively optional (added below). We import
# these libraries using pytest.importorskip(). So tests will be skipped if
# they are not available.
+
+ # Notes on the cryptography package:
+ # - 3.3.2 is shipped on Debian bullseye.
+ # - 3.4.x drops support for Python 2, making it a version of note for older LTS
+ # distros.
+ # - 35.x switched versioning schemes and moved to Rust parsing.
+ # - 40.x is the last version supporting Python 3.6.
+ "cryptography >= 3.3.2",
]
[tool.pytest.ini_options]
diff --git a/src/test/ssl/Makefile b/src/test/ssl/Makefile
index aa062945fb9..287729ad9fb 100644
--- a/src/test/ssl/Makefile
+++ b/src/test/ssl/Makefile
@@ -30,6 +30,8 @@ clean distclean:
# Doesn't depend on sslfiles because we don't rebuild them by default
check:
$(prove_check)
+ # XXX these suites should run independently, not serially
+ $(pytest_check)
installcheck:
$(prove_installcheck)
diff --git a/src/test/ssl/meson.build b/src/test/ssl/meson.build
index 9e5bdbb6136..6ec274d8165 100644
--- a/src/test/ssl/meson.build
+++ b/src/test/ssl/meson.build
@@ -15,4 +15,10 @@ tests += {
't/003_sslinfo.pl',
],
},
+ 'pytest': {
+ 'tests': [
+ 'pyt/test_client.py',
+ 'pyt/test_server.py',
+ ],
+ },
}
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
new file mode 100644
index 00000000000..870f738ac44
--- /dev/null
+++ b/src/test/ssl/pyt/conftest.py
@@ -0,0 +1,128 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import datetime
+import re
+import subprocess
+import tempfile
+from collections import namedtuple
+
+import pytest
+
+
+@pytest.fixture(scope="session")
+def cryptography():
+ return pytest.importorskip("cryptography", "3.3.2")
+
+
+Cert = namedtuple("Cert", "cert, certpath, key, keypath")
+
+
+@pytest.fixture(scope="session")
+def certs(cryptography, tmp_path_factory):
+ """
+ Caches commonly used certificates at the session level, and provides a way
+ to create new ones.
+
+ - certs.ca: the root CA certificate
+
+    - certs.server: the "standard" server certificate, signed by certs.ca
+
+ - certs.server_host: the hostname of the certs.server certificate
+
+ - certs.new(): creates a custom certificate, signed by certs.ca
+ """
+
+ from cryptography import x509
+ from cryptography.hazmat.primitives import hashes, serialization
+ from cryptography.hazmat.primitives.asymmetric import rsa
+ from cryptography.x509.oid import NameOID
+
+ tmpdir = tmp_path_factory.mktemp("test-certs")
+
+ class _Certs:
+ def __init__(self):
+ self.ca = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, "PG pytest CA")],
+ ),
+ ca=True,
+ )
+
+ self.server_host = "example.org"
+ self.server = self.new(
+ x509.Name(
+ [x509.NameAttribute(NameOID.COMMON_NAME, self.server_host)],
+ )
+ )
+
+ def new(self, subject: x509.Name, *, ca=False) -> Cert:
+ """
+ Creates and signs a new Cert with the given subject name. If ca is
+ True, the certificate will be self-signed; otherwise the certificate
+ is signed by self.ca.
+ """
+ key = rsa.generate_private_key(
+ public_exponent=65537,
+ key_size=2048,
+ )
+
+ builder = x509.CertificateBuilder()
+ now = datetime.datetime.now(datetime.timezone.utc)
+
+ builder = (
+ builder.subject_name(subject)
+ .public_key(key.public_key())
+ .serial_number(x509.random_serial_number())
+ .not_valid_before(now)
+ .not_valid_after(now + datetime.timedelta(hours=1))
+ )
+
+ if ca:
+ builder = builder.issuer_name(subject)
+ else:
+ builder = builder.issuer_name(self.ca.cert.subject)
+
+ builder = builder.add_extension(
+ x509.BasicConstraints(ca=ca, path_length=None),
+ critical=True,
+ )
+
+ cert = builder.sign(
+ private_key=key if ca else self.ca.key,
+ algorithm=hashes.SHA256(),
+ )
+
+ # Dump the certificate and key to file.
+ keypath = self._tofile(
+ key.private_bytes(
+ serialization.Encoding.PEM,
+ serialization.PrivateFormat.PKCS8,
+ serialization.NoEncryption(),
+ ),
+ suffix=".key",
+ )
+ certpath = self._tofile(
+ cert.public_bytes(serialization.Encoding.PEM),
+ suffix="-ca.crt" if ca else ".crt",
+ )
+
+ return Cert(
+ cert=cert,
+ certpath=certpath,
+ key=key,
+ keypath=keypath,
+ )
+
+ def _tofile(self, data: bytes, *, suffix) -> str:
+ """
+ Dumps data to a file on disk with the requested suffix and returns
+ the path. The file is located somewhere in pytest's temporary
+ directory root.
+ """
+ f = tempfile.NamedTemporaryFile(suffix=suffix, dir=tmpdir, delete=False)
+ with f:
+ f.write(data)
+
+ return f.name
+
+ return _Certs()
diff --git a/src/test/ssl/pyt/test_client.py b/src/test/ssl/pyt/test_client.py
new file mode 100644
index 00000000000..556bad33bf8
--- /dev/null
+++ b/src/test/ssl/pyt/test_client.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import contextlib
+import ctypes
+import socket
+import ssl
+import struct
+import threading
+from typing import Callable
+
+import pytest
+
+import pypg
+from libpq import LibpqError, ExecStatus
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+
+@pytest.fixture(scope="session", autouse=True)
+def skip_if_no_ssl_support(libpq_handle):
+ """Skips tests if SSL support is not configured."""
+
+ # Declare PQsslAttribute().
+ PQsslAttribute = libpq_handle.PQsslAttribute
+ PQsslAttribute.restype = ctypes.c_char_p
+ PQsslAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
+
+ if not PQsslAttribute(None, b"library"):
+ pytest.skip("requires SSL support to be configured")
+
+
+#
+# Test Fixtures
+#
+
+
+@pytest.fixture
+def tcp_server_class(remaining_timeout):
+ """
+ Metafixture to combine related logic for tcp_server and ssl_server.
+
+ TODO: combine with test_libpq.local_server
+ """
+
+ class _TCPServer(contextlib.ExitStack):
+ """
+ Implementation class for tcp_server. See .background() for the primary
+ entry point for tests. Postgres clients may connect to this server via
+ **tcp_server.conninfo.
+
+ _TCPServer derives from contextlib.ExitStack to provide easy cleanup of
+ associated resources; see the documentation for that class for a full
+ explanation.
+ """
+
+ def __init__(self):
+ super().__init__()
+
+ self._thread = None
+ self._thread_exc = None
+ self._listener = self.enter_context(
+ socket.socket(socket.AF_INET, socket.SOCK_STREAM),
+ )
+
+ self._bind_and_listen()
+ sockname = self._listener.getsockname()
+ self.conninfo = dict(
+ hostaddr=sockname[0],
+ port=sockname[1],
+ )
+
+ def _bind_and_listen(self):
+ """
+ Does the actual work of binding the socket and listening for
+ connections.
+
+ The listen backlog is currently hardcoded to one.
+ """
+ self._listener.bind(("127.0.0.1", 0))
+ self._listener.listen(1)
+
+ def background(self, fn: Callable[[socket.socket], None]) -> None:
+ """
+ Accepts a client connection on a background thread and passes it to
+ the provided callback. Any exceptions raised from the callback will
+ be re-raised on the main thread during fixture teardown.
+
+ Blocking operations on the connected socket default to using the
+ remaining_timeout(), though this can be changed by the test via the
+ socket's .settimeout().
+ """
+
+ def _bg():
+ try:
+ self._listener.settimeout(remaining_timeout())
+ sock, _ = self._listener.accept()
+
+ with sock:
+ sock.settimeout(remaining_timeout())
+ fn(sock)
+
+ except Exception as e:
+ # Save the exception for re-raising on the main thread.
+ self._thread_exc = e
+
+ # TODO: rather than using callback(), consider explicitly signaling
+ # the fn() implementation to stop early if we get an exception.
+ # Otherwise we'll hang until the end of the timeout.
+ self._thread = threading.Thread(target=_bg)
+ self.callback(self._join)
+
+ self._thread.start()
+
+ def _join(self):
+ """
+ Waits for the background thread to finish and raises any thrown
+ exception. This is called during fixture teardown.
+ """
+ # Give a little bit of wiggle room on the join timeout, since we're
+ # racing against the test's own use of remaining_timeout(). (It's
+ # preferable to let tests report timeouts; the stack traces will
+ # help with debugging.)
+ self._thread.join(remaining_timeout() + 1)
+ if self._thread.is_alive():
+ raise TimeoutError("background thread is still running after timeout")
+
+ if self._thread_exc is not None:
+ raise self._thread_exc
+
+ return _TCPServer
+
+
+@pytest.fixture
+def tcp_server(tcp_server_class):
+ """
+ Opens up a local TCP socket for mocking a Postgres server on a background
+ thread. See the _TCPServer API for usage.
+ """
+ with tcp_server_class() as s:
+ yield s
+
+
+@pytest.fixture
+def ssl_server(tcp_server_class, certs):
+ """
+ Like tcp_server, but with an additional .background_ssl() method which will
+ perform a SSLRequest handshake on the socket before handing the connection
+ to the test callback.
+
+ This server uses certs.server as its identity.
+ """
+
+ class _SSLServer(tcp_server_class):
+ def __init__(self):
+ super().__init__()
+
+ self.conninfo["host"] = certs.server_host
+
+ self._ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
+ self._ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ def background_ssl(self, fn: Callable[[ssl.SSLSocket], None]) -> None:
+ """
+ Invokes a server callback as with .background(), but an SSLRequest
+ handshake is performed first, and the socket provided to the
+ callback has been wrapped in an OpenSSL layer.
+ """
+
+ def handshake(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Accept the SSLRequest.
+ s.send(b"S")
+
+ with self._ctx.wrap_socket(s, server_side=True) as wrapped:
+ fn(wrapped)
+
+ self.background(handshake)
+
+ with _SSLServer() as s:
+ yield s
+
+
+#
+# Tests
+#
+
+
+@pytest.mark.parametrize("sslmode", ("require", "verify-ca", "verify-full"))
+def test_server_with_ssl_disabled(connect, tcp_server, certs, sslmode):
+ """
+ Make sure client refuses to talk to non-SSL servers with stricter
+ sslmodes.
+ """
+
+ def refuse_ssl(s: socket.socket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Make sure we get an SSLRequest.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (1234, 5679)
+ assert pktlen == 8
+
+ # Refuse the SSLRequest.
+ s.send(b"N")
+
+ # Wait for the client to close the connection.
+ assert not s.recv(1), "client sent unexpected data"
+
+ tcp_server.background(refuse_ssl)
+
+ with pytest.raises(LibpqError, match="server does not support SSL"):
+ connect(
+ **tcp_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode=sslmode,
+ )
+
+
+def test_verify_full_connection(connect, ssl_server, certs):
+ """Completes a verify-full connection and empty query."""
+
+ def handle_empty_query(s: ssl.SSLSocket):
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+
+ # Check the startup packet version, then discard the remainder.
+ version = struct.unpack("!HH", s.recv(4))
+ assert version == (3, 0)
+ s.recv(pktlen - 8)
+
+ # Send the required litany of server messages.
+ s.send(struct.pack("!cII", b"R", 8, 0)) # AuthenticationOK
+
+ # ParameterStatus: client_encoding
+ key = b"client_encoding\0"
+ val = b"UTF-8\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ # ParameterStatus: DateStyle
+ key = b"DateStyle\0"
+ val = b"ISO, MDY\0"
+ s.send(struct.pack("!cI", b"S", 4 + len(key) + len(val)) + key + val)
+
+ s.send(struct.pack("!cIII", b"K", 12, 1234, 1234)) # BackendKeyData
+ s.send(struct.pack("!cIc", b"Z", 5, b"I")) # ReadyForQuery
+
+ # Expect an empty query.
+ pkttype = s.recv(1)
+ assert pkttype == b"Q"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert s.recv(pktlen - 4) == b"\0"
+
+ # Send an EmptyQueryResponse+ReadyForQuery.
+ s.send(struct.pack("!cI", b"I", 4))
+ s.send(struct.pack("!cIc", b"Z", 5, b"I"))
+
+ # libpq should terminate and close the connection.
+ assert s.recv(1) == b"X"
+ pktlen = struct.unpack("!I", s.recv(4))[0]
+ assert pktlen == 4
+
+ assert not s.recv(1), "client sent unexpected data"
+
+ ssl_server.background_ssl(handle_empty_query)
+
+ conn = connect(
+ **ssl_server.conninfo,
+ sslrootcert=certs.ca.certpath,
+ sslmode="verify-full",
+ )
+ with conn:
+ assert conn.exec("").status() == ExecStatus.PGRES_EMPTY_QUERY
--
2.52.0
[Attachment: v10-0005-WIP-pytest-Add-some-server-side-SSL-tests.patch]
From 328de114ac60f1a274b289258da7f8129725b13f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 16 Dec 2025 09:31:46 +0100
Subject: [PATCH v10 5/5] WIP: pytest: Add some server-side SSL tests
In the same vein as the previous commit, this is a server-only test
suite operating against a mock client. The test itself is a heavily
parameterized check for direct-SSL handshake behavior, using a
combination of "standard" and "custom" certificates via the certs
fixture.
installcheck is currently unsupported, but the architecture has some
extension points that should make it possible later. For now, a new
server is always started for the test session.
TODOs:
- improve remaining_timeout() integration with socket operations; at the
moment, the timeout resets on every call rather than decrementing
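One way to make each socket operation respect a shared deadline is a
small read helper along these lines (a sketch only, not part of this
patch; it assumes remaining_timeout() returns the seconds left until a
fixed per-test deadline):
    def recv_exact(sock, n, remaining_timeout):
        """Read exactly n bytes, re-arming the socket timeout from the
        shared deadline before each recv() so the total wait is bounded."""
        buf = b""
        while len(buf) < n:
            sock.settimeout(remaining_timeout())
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise EOFError("peer closed the connection")
            buf += chunk
        return buf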
---
src/test/ssl/pyt/conftest.py | 50 ++++++++++
src/test/ssl/pyt/test_server.py | 161 ++++++++++++++++++++++++++++++++
2 files changed, 211 insertions(+)
create mode 100644 src/test/ssl/pyt/test_server.py
diff --git a/src/test/ssl/pyt/conftest.py b/src/test/ssl/pyt/conftest.py
index 870f738ac44..d121724800b 100644
--- a/src/test/ssl/pyt/conftest.py
+++ b/src/test/ssl/pyt/conftest.py
@@ -126,3 +126,53 @@ def certs(cryptography, tmp_path_factory):
return f.name
return _Certs()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def ssl_setup(pg_server_module, certs, datadir):
+ """
+ Sets up required server settings for all tests in this module.
+ """
+ try:
+ with pg_server_module.restarting() as s:
+ s.conf.set(
+ ssl="on",
+ ssl_ca_file=certs.ca.certpath,
+ ssl_cert_file=certs.server.certpath,
+ ssl_key_file=certs.server.keypath,
+ )
+
+ # Reject by default.
+ s.hba.prepend("hostssl all all all reject")
+
+ except subprocess.CalledProcessError:
+ # This is a decent place to skip if the server isn't set up for SSL.
+ logpath = datadir / "postgresql.log"
+ unsupported = re.compile("SSL is not supported")
+
+ with open(logpath, "r") as log:
+ for line in log:
+ if unsupported.search(line):
+ pytest.skip("the server does not support SSL")
+
+ # Some other error happened.
+ raise
+
+ users = pg_server_module.create_users("ssl")
+ dbs = pg_server_module.create_dbs("ssl")
+
+ return (users, dbs)
+
+
+@pytest.fixture(scope="module")
+def client_cert(ssl_setup, certs):
+ """
+ Creates a Cert for the "ssl" user.
+ """
+ from cryptography import x509
+ from cryptography.x509.oid import NameOID
+
+ users, _ = ssl_setup
+ user = users["ssl"]
+
+ return certs.new(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user)]))
diff --git a/src/test/ssl/pyt/test_server.py b/src/test/ssl/pyt/test_server.py
new file mode 100644
index 00000000000..d5cb14b6c9a
--- /dev/null
+++ b/src/test/ssl/pyt/test_server.py
@@ -0,0 +1,161 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+import re
+import socket
+import ssl
+import struct
+
+import pytest
+
+import pypg
+
+# This suite opens up local TCP ports and is hidden behind PG_TEST_EXTRA=ssl.
+pytestmark = pypg.require_test_extras("ssl")
+
+# For use with the `creds` parameter below.
+CLIENT = "client"
+SERVER = "server"
+
+
+# fmt: off
+@pytest.mark.parametrize(
+ "auth_method, creds, expected_error",
+[
+ # Trust allows anything.
+ ("trust", None, None),
+ ("trust", CLIENT, None),
+ ("trust", SERVER, None),
+
+ # verify-ca allows any CA-signed certificate.
+ ("trust clientcert=verify-ca", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-ca", CLIENT, None),
+ ("trust clientcert=verify-ca", SERVER, None),
+
+ # cert and verify-full allow only the correct certificate.
+ ("trust clientcert=verify-full", None, "requires a valid client certificate"),
+ ("trust clientcert=verify-full", CLIENT, None),
+ ("trust clientcert=verify-full", SERVER, "authentication failed for user"),
+ ("cert", None, "requires a valid client certificate"),
+ ("cert", CLIENT, None),
+ ("cert", SERVER, "authentication failed for user"),
+],
+)
+# fmt: on
+def test_direct_ssl_certificate_authentication(
+ pg,
+ ssl_setup,
+ certs,
+ client_cert,
+ remaining_timeout,
+ # test parameters
+ auth_method,
+ creds,
+ expected_error,
+):
+ """
+ Tests direct SSL connections with various client-certificate/HBA
+ combinations.
+ """
+
+ # Set up the HBA as desired by the test.
+ users, dbs = ssl_setup
+
+ user = users["ssl"]
+ db = dbs["ssl"]
+
+ with pg.reloading() as s:
+ s.hba.prepend(
+ ["hostssl", db, user, "127.0.0.1/32", auth_method],
+ ["hostssl", db, user, "::1/128", auth_method],
+ )
+
+ # Configure the SSL settings for the client.
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
+ ctx.load_verify_locations(cafile=certs.ca.certpath)
+ ctx.set_alpn_protocols(["postgresql"]) # for direct SSL
+
+ # Load up a client certificate if required by the test.
+ if creds == CLIENT:
+ ctx.load_cert_chain(client_cert.certpath, client_cert.keypath)
+ elif creds == SERVER:
+ # Using a server certificate as the client credential is expected to
+ # work only for clientcert=verify-ca (and `trust`, naturally).
+ ctx.load_cert_chain(certs.server.certpath, certs.server.keypath)
+
+ # Make a direct SSL connection. There's no SSLRequest in the handshake; we
+ # simply wrap a TCP connection with OpenSSL.
+ addr = (pg.hostaddr, pg.port)
+ with socket.create_connection(addr) as s:
+ s.settimeout(remaining_timeout()) # XXX this resets every operation
+
+ with ctx.wrap_socket(s, server_hostname=certs.server_host) as conn:
+ # Build and send the startup packet.
+ startup_options = dict(
+ user=user,
+ database=db,
+ application_name="pytest",
+ )
+
+ payload = b""
+ for k, v in startup_options.items():
+ payload += k.encode() + b"\0"
+ payload += str(v).encode() + b"\0"
+ payload += b"\0" # null terminator
+
+ pktlen = 4 + 4 + len(payload)
+ conn.send(struct.pack("!IHH", pktlen, 3, 0) + payload)
+
+ if not expected_error:
+ # Expect an AuthenticationOK to come back.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"R"
+ assert pktlen == 8
+
+ authn_result = struct.unpack("!I", conn.recv(4))[0]
+ assert authn_result == 0
+
+ # Read and discard to ReadyForQuery.
+ while True:
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ payload = conn.recv(pktlen - 4)
+
+ if pkttype == b"Z":
+ assert payload == b"I"
+ break
+
+ # Send an empty query.
+ conn.send(struct.pack("!cI", b"Q", 5) + b"\0")
+
+ # Expect EmptyQueryResponse+ReadyForQuery.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"I"
+ assert pktlen == 4
+
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"Z"
+
+ payload = conn.recv(pktlen - 4)
+ assert payload == b"I"
+
+ else:
+ # Match the expected authentication error.
+ pkttype, pktlen = struct.unpack("!cI", conn.recv(5))
+ assert pkttype == b"E"
+
+ payload = conn.recv(pktlen - 4)
+ msg = None
+
+ for component in payload.split(b"\0"):
+ if not component:
+ break # end of message
+
+ key, val = component[:1], component[1:]
+ if key == b"S":
+ assert val == b"FATAL"
+ elif key == b"M":
+ msg = val.decode()
+
+ assert re.search(expected_error, msg), "server error did not match"
+
+ # Terminate.
+ conn.send(struct.pack("!cI", b"X", 4))
--
2.52.0